20090031175 | SYSTEM AND METHOD FOR ANALYZING STREAMS AND COUNTING STREAM ITEMS ON MULTI-CORE PROCESSORS | 2009-01-29 | Aggarwal et al. | 714/47 |
5950185 | Apparatus and method for approximating frequency moments | 1999-09-07 | Alon et al. | 707/758 |
5758338 | Method for characterizing information in data sets using multifractals | 1998-05-26 | Faloutsos et al. | 1/1 |
This invention discloses continuous functional monitoring of distributed network activity using algorithms based on frequency moment calculations.
Functional monitoring problems are fundamental in distributed systems, in particular sensor networks, where minimization of communication is necessary. Functional monitoring also concerns problems in communication complexity, communication theory, and signal processing.
In traditional sensor systems such as smart homes and elsewhere, security sensors are carefully laid out and configured, and there is a convenient power source. The straightforward way to monitor a phenomenon is to take measurements every few time instants, send them to a central site, and use back-end systems to analyze the entire data trace.
In contrast, modern sensor networks, addressed in this invention, are more ad hoc and mobile. A modern sensor network may be distributed arbitrarily, operate on battery power, and face high bandwidth costs (e.g., for wireless communication). A battery-operated device needs to conserve its power for long use between charging periods. Further, these sensors have some memory and computing power, so they can perform local computations and be sparing in their use of radio for communication, since radio use is the biggest source of battery drain. In this scenario, collecting all the data from the sensors to calculate a function exactly in the back-end is wasteful; a more direct approach is to design protocols that trigger an alarm when a threshold is exceeded, with the emphasis on minimizing communication during the battery lifetime.
Moreover, even in a hard wired (i.e., not wireless) environment, there is a bandwidth cost to transmitting data, and minimization of communication of purely overhead functions is a generally desirable feature.
In this context, variations of functional monitoring have been proposed as “reactive monitoring” (in networking, see M. Dilman and D. Raz, “Efficient reactive monitoring,” IEEE Infocom, 2001), and “distributed triggers” (in databases, see G. Cormode and M. Garofalakis, “Sketching streams through the net: Distributed approximate query tracking,” Intl. Conf. Very Large Data Bases, 2005; G. Cormode, S. Muthukrishnan, and W. Zhuang, “What's different: Distributed, continuous monitoring of duplicate resilient aggregates on data streams,” Intl. Conf. on Data Engineering, 2006; and G. Cormode, S. Muthukrishnan, and W. Zhuang, “Conquering the divide: Continuous clustering of distributed data streams,” Intl. Conf. on Data Engineering, 2007).
Prior work has considered many different functions, and typically presents algorithms with correctness guarantees, but no nontrivial communication bounds. Some of the above work takes a distributed streaming approach where in addition to optimizing the bits communicated, the algorithms also attempt to optimize the space and time requirements of each of the sensors.
This invention provides a method for continuous distributed monitoring of computer network activity, focusing on frequency moments, given by formula (I).
F_{p}=Σ_{i}m_{i}^{p} (I)
where F_{p} is the frequency moment of order p, and m_{i} is the frequency of item i over all sites.
Estimating the frequency moments has become the keystone problem in streaming algorithms since the seminal paper of Alon et al. (N. Alon, Y. Matias, and M. Szegedy, “The space complexity of approximating the frequency moments,” Journal of Computer and System Sciences, 58:137-147, 1999). In particular, the first three frequency moments (p=0, 1, 2) are useful in this invention. F_{1} is a simple summation of all elements, F_{0} corresponds to the number of distinct elements, and F_{2} is the sum of the squares of the item frequencies. All three have applications to a wide variety of monitoring situations in which the goal is to test when a certain value, such as the load on a distributed system, passes a critical threshold.
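By way of illustration, formula (I) can be computed directly from item frequencies. The following Python sketch (the function name is illustrative and not part of the disclosure) evaluates F_{0}, F_{1}, and F_{2} for a small stream:

```python
from collections import Counter

def frequency_moment(items, p):
    """Compute F_p = sum_i m_i^p over the item frequencies m_i.

    Only observed items contribute, so F_0 (with m_i^0 = 1 per
    observed item) counts the distinct elements.
    """
    counts = Counter(items)
    return sum(m ** p for m in counts.values())

stream = ["a", "b", "a", "c", "a", "b"]
f0 = frequency_moment(stream, 0)  # distinct items a, b, c
f1 = frequency_moment(stream, 1)  # total number of elements
f2 = frequency_moment(stream, 2)  # 3^2 + 2^2 + 1^2
```

In a distributed deployment the counts would be accumulated across all sites, which is precisely what the monitoring protocols below avoid doing explicitly.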
In one aspect of this invention, network devices are programmed to report a particular network function to a network manager (i.e., a person), where the decision to transmit the report is based on a frequency moment calculation performed locally on the reporting device. By careful selection of the parameters of the calculation, a minimum amount of data can be reported that provides a pre-selected degree of timeliness and accuracy to the network manager. The transmission of a report to a person alerts that person to a situation on the network, for example, a certain percentage of network errors. On being alerted, a manager can take remedial steps to correct a problem or otherwise address a situation which, if left unattended, could cause a deterioration in network conditions and which requires human intervention.
In another aspect of this invention, frequency moment calculations are employed to report network statistics, such as how many packets are routed, where the packets originate geographically, where they are addressed geographically, or how many malformed packets have been transmitted. For any such statistical parameter, the decision to make a report is based on frequency moment calculations performed on a local device, such as a router or server.
In another aspect of this invention, the reports from local devices, computed with a frequency moment calculation, are transmitted to a network manager, which can make a decision on a course of action. The network manager can be a server which makes an automated decision, for example to bypass a malfunctioning router. Alternatively, a report can be made to a work station where a person can make manual changes.
In an embodiment of this invention, a method for continuous distributed monitoring of computer network activity is provided, with a computer network including a central coordinator computer and a set of distributed remote devices, wherein the central coordinator computer monitors and reports on network activity; selecting a network activity of interest; programming remote devices to report on the selected activity according to a frequency moment calculation, as noted above. In some embodiments, p≧1 and the frequency moment algorithm proceeds in two or more rounds. In further embodiments, each remote device monitors a function of the selected network activity, and sends a bit to a central coordinator when the value of the function increases above a pre-determined threshold. In a related aspect, each remote device monitors a function of a device connected to the network, and sends a bit to a central coordinator when the value of the function increases above a pre-determined threshold.
In another aspect of this invention, the frequency moment algorithm proceeds in two or more rounds, where each remote device monitors a function selected from the selected network activity and a function of a device connected to the network; and each remote device sends a bit to a central coordinator when the value of the function increases above a pre-determined threshold, and the coordinator completes a round after receiving a pre-determined number of bits from the set of remote devices, and the coordinator collects information from all remote devices at the end of each round, where said information summarizes the data received at each remote device, and the summary information is in the form of a sum or sketch of data values, and where the coordinator determines that a global threshold has been reached based on a combination of summaries.
In another aspect of this invention, F_{1 }is monitored, where the frequency moment calculation proceeds in a single round, and where each remote device waits until it receives a pre-determined number of elements and then simulates the tossing of a biased coin, with true randomness or with a pseudo-random number generator, and where the device sends a bit to the coordinator if the result of the coin toss is heads; and where the coordinator determines that a global threshold has been reached after receiving a pre-determined quantity of bits from the remote devices.
Where the frequency moment is F_{0}, the frequency moment calculation may proceed in a single round. In such a case, each remote device randomly selects one of two hash functions f or g, and each device evaluates the selected hash function based on data received on the selected network activity, and the second hash function is evaluated only if certain criteria are met in the first hash function; and where, if an item with the same hash value has not already been observed by the remote site, then that hash value is sent to the coordinator; and the central coordinator reports that a global threshold has been reached when the number of distinct hash values received exceeds a pre-determined number.
Where the frequency moment is F_{2}, the algorithm may proceed in two phases of rounds, which are in turn divided into sub-rounds. In this case, the remote devices and the coordinator use sketch algorithms to estimate the current L_{2} norm of vectors to varying levels of accuracy, and each round uses a pre-determined threshold so that each device sends a bit to the coordinator when its local updates during the current round have an L_{2} norm which exceeds this threshold.
In another aspect, where the frequency moment is F_{2}, the frequency moment calculation proceeds in two phases of rounds, where F_{2} does not exceed a certain fraction of the global threshold at the completion of the first phase, and where, during the second phase, F_{2} is monitored until it is within a certain range of the global threshold.
In another aspect, where the frequency moment is F_{2}, the algorithm employs two phases of rounds. In the first phase, there is one sub-round per round, and the coordinator collects sketches from each device with a communication cost based on the number of devices.
In the second phase, the coordinator collects sketches from remote sites with a communication cost based on the number of remote devices divided by an error factor.
This invention further discloses a method for raising an alarm in a computer network with a set of remote reporting devices and a coordinator server, wherein the coordinator server has an initial output of 0, with continuous distributed monitoring of a function on the network or a function at a remote device. The continuous distributed monitoring comprises:
The method of raising an alarm in the aforementioned paragraph may further constitute an alarm that alerts a person to a situation on the network, or alternatively, the alarm may alert an automated process to a situation on the network.
As described herein, continuous distributed functional monitoring problems are “(k, ƒ, τ, ε)” problems, where k represents the number of players, ƒ is a function, τ is a threshold, and ε is an error factor. In the broadest sense, a (k, ƒ, τ, ε) problem is designed to change its output, such as raising an alarm, when a threshold τ is reached, where the players are observed continually and in real time.
In this invention, (k, ƒ, τ, ε) problems can be used to supervise and monitor computer networks, and generate reports in real time based on a pre-selected network function. An important feature in network supervision, monitoring, and control is balancing the accuracy of network reports, the timeliness of the reports, and the bandwidth usage required to make sufficient reports.
The purpose of providing real time reports is to enable rapid changes that correct problems or fine tune network performance in real time, minimizing network slowdowns or stoppages and increasing performance. For example, if an excess load of traffic entering a network is detected, such as at rush hour, additional devices can be brought online to handle the load, or lower priority activities can be stopped in favor of higher priority traffic.
In this invention, aggregate network functions are observed that are amenable to statistical analysis, such as network load, origin or destination of packets, and error rates. As such, a certain amount of error in the accuracy of reports can be tolerated. Thus, in an aspect of this invention, a pre-determined error factor can be employed, such as a 1% or a 10% error rate, within which errors are acceptable.
In any network reporting function, minimization of bandwidth is an important objective. Any reporting function can be considered an overhead activity, so the object of a reporting activity is to transmit the minimum amount of information necessary to make reports that meet the pre-determined parameters of accuracy and timeliness. Minimizing bandwidth is especially desirable in wireless or battery powered devices, where transmission of data consumes power and contributes to depletion of batteries.
As an illustration of the parameters of this invention, consider a simple case where there are two observers, Alice and Bob, who watch goods entering or leaving a warehouse through separate doors, and a manager, Carol. Alice and Bob do not speak with each other, but each observer has a two-way communication channel with Carol. The objective is to design a system that minimizes the communication of each observer with Carol, while at the same time providing Carol with real time and accurate information on the flow of goods in and out of the warehouse. Mathematically, this can be expressed as |C(t)|=|A(t)|+|B(t)|, where t is time and C(t) is the monitored function. If b_{A}(t) is the total number of bits sent from Alice to Carol, and b_{B}(t) is the total number of bits sent from Bob to Carol, then the goal is to minimize b(t), where b(t)=b_{A}(t)+b_{B}(t).
In the most trivial case, Alice and Bob simply send a report (bit) every time an item enters or leaves the warehouse. In this case, b(t)=|A(t)|+|B(t)|. Of greater interest is the more complex case, where given ε, Carol's task is to output 0 whenever C(t)≦(1−ε)τ, and to output 1 whenever C(t)>τ, for a threshold τ. Put differently, if the threshold is exceeded, an alarm is raised.
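Carol's decision rule in this complex case can be sketched as follows (illustrative Python; the function name and arguments are assumptions for exposition, not part of the disclosure):

```python
def carol_output(c_t, tau, eps, prev_output=0):
    """Coordinator decision rule for approximate threshold monitoring:
    output 1 once C(t) > tau; output 0 while C(t) <= (1 - eps) * tau;
    in the gray zone between, either answer is acceptable, so the
    previous output is simply kept."""
    if c_t > tau:
        return 1
    if c_t <= (1 - eps) * tau:
        return 0
    return prev_output
```

For example, with τ=100 and ε=0.1, a count of 89 yields output 0, a count of 101 raises the alarm, and a count of 95 leaves the previous output unchanged.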
Several communication procedures in principle can achieve the goal of providing reports while minimizing communication between the observers and the manager (the manager is also referred to herein as a coordinator). A simple method is a coin toss, where, for example, Alice and Bob each flip a coin each time an item enters the warehouse and send Carol a report when the coin shows heads.
Another procedure is the “GLOBAL” method, where Alice and Bob know a rough estimate of the slack Δ=τ−C(t′) from some prior time t′, and each observer sends a bit whenever the number of items it has observed exceeds Δ/2. Whenever Carol receives such a bit, she computes a new value of Δ and sends updated estimates to Alice and Bob.
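A minimal sketch of one observer in the GLOBAL method, assuming a fixed slack Δ between synchronizations (the class name is illustrative and not part of the disclosure):

```python
class GlobalObserver:
    """One of the two observers in the GLOBAL scheme: with remaining
    slack Delta = tau - C(t'), the observer sends a bit after every
    Delta/2 items it observes, then resets its local count (a new
    Delta would be redistributed by the coordinator)."""

    def __init__(self, delta):
        self.threshold = delta / 2  # Delta split between two observers
        self.count = 0

    def observe(self):
        """Record one item; return True when a bit must be sent to Carol."""
        self.count += 1
        if self.count >= self.threshold:
            self.count = 0
            return True
        return False
```

With Δ=10, an observer stays silent for four items and sends a bit on every fifth, so at most two bits reach Carol before the threshold could possibly be crossed.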
Another procedure is the “LOCAL” method, where Alice and Bob each create a model for arrival times of items and communicate the model parameters to Carol. The observers send bits to summarize differences when their current data significantly differs from their models.
This invention discloses functional monitoring problems generally in which there are k≧2 sites, and we wish to monitor C(t)=ƒ(A_{1}(t)∪ . . . ∪A_{k}(t)), where A_{i}(t) is the multiset of items collected at site i by time t, and ƒ is a monotonically nondecreasing function in time. There are two variants: threshold monitoring (determining when C(t) exceeds a threshold τ) and value monitoring (providing a good approximation to C(t) at all times t). Value monitoring directly solves threshold monitoring, and running O((1/ε) log T) instances of a threshold monitoring algorithm for thresholds τ=1, (1+ε), (1+ε)^{2}, . . . , T solves value monitoring with relative error 1+ε. Thus, the two variants differ by at most a factor of O((1/ε) log T). This disclosure will focus on threshold monitoring, which will be referred to as (k, ƒ, τ, ε) problems.
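The reduction from value monitoring to threshold monitoring can be illustrated by enumerating the geometric sequence of thresholds; one threshold-monitoring instance is run per value (illustrative Python; the function name is an assumption for exposition):

```python
def geometric_thresholds(eps, T):
    """Thresholds 1, (1+eps), (1+eps)^2, ..., up to T.  Running one
    threshold-monitoring instance per value tracks C(t) to relative
    error (1+eps), using O((1/eps) log T) instances in total."""
    taus = []
    tau = 1.0
    while tau <= T:
        taus.append(tau)
        tau *= 1 + eps
    return taus

thresholds = geometric_thresholds(0.5, 100)
```

For ε=0.5 and T=100 this produces twelve thresholds, each 1.5 times the last, so at any time the largest crossed threshold approximates C(t) within a factor of 1.5.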
Thus, in one aspect, this invention provides a set of methods for monitoring particular functions of distributed data. For example, consider monitoring the number of malformed packets observed by a collection of routers in a large network, and wishing to raise an alert if the number of such packets exceeds some large quantity, say one million. This invention allows this to be monitored using an amount of communication which is much smaller than simply alerting a central monitor for every observed bad packet (very costly), while also avoiding periodic polling of routers for values (also costly, and potentially slow to respond). The communication cost of this monitoring is tightly bounded, while guaranteeing very high accuracy. In comparison to solutions to similar problems described in the literature, our solutions offer significantly less communication (up to an order of magnitude less) and require minimal computation power.
Accordingly, this invention is concerned with monitoring a function over a distributed set of computing devices and associated inputs. While the inventors have given solutions to such problems in the past, the methods and apparatus presented here apply to the same problems and present significant improvements in the cost of the monitoring. For example, consider a network of routers each observing their local traffic, where the network manager wishes to compute some function over the global traffic. Alternatively, consider a sensor network monitoring environmental conditions, such as stock in a warehouse or battlefield conditions. The function being monitored could simply be a sum of values observed, a count of the number of distinct objects observed globally, or the root-mean-square of a large number of values. Prior work (including prior work of the present inventors) has addressed these problems and given solutions which reduce the communication over the simple solution of pushing every single piece of information up to a centralized location.
In another aspect, this invention is applicable to situations where exact answers are unnecessary, such as reports of aggregate network performance or approximate error rates. In these types of reports, approximations with accuracy guarantees suffice. Thus, the functions of the invention have a built in error factor, ε. The use of a report of approximate data with an accuracy guarantee allows a tradeoff between accuracy and communication cost, i.e., bandwidth and processing resources required for the report.
In another aspect, this invention is useful for reporting on complex network functions. In the case of simple functions, periodic polling can often suffice; for example, SNMP can poll traffic at a coarse granularity. However, a sampling method such as a periodic poll cannot effectively report on a holistic aggregate of data, such as data on network performance or error rates. An approach to reporting aggregate data is to carefully balance the period of polling with the communication cost of the report. Too infrequent polling causes unnecessary delays in event observations. Too frequent polling has high communication costs, including high bandwidth usage. An additional problem with overly frequent polling arises with remote battery-powered sensors, which require battery power to send data, perhaps wirelessly; overly frequent reports will deplete the batteries needlessly.
The methods of this invention address these concerns by intelligently reducing communications to the minimum bandwidth necessary to provide guaranteed error rates and guaranteed rapid response to events.
In signal processing, the emerging area of compressed sensing redefines the problem of signal acquisition as that of acquiring not the entire signal, but only the information needed to reconstruct the few salient coefficients using a suitable dictionary. These results can be extended to (k, ƒ, τ, ε) problems where the function is the salient coefficients needed to reconstruct the entire signal. See S. Muthukrishnan, “Some algorithmic problems and results in compressed sensing,” Allerton Conference, 2006. Further, the Muthukrishnan paper extended compressed sensing to functional compressed sensing where we need to only acquire information to evaluate specific functions of the input signal. Except for preliminary results in Muthukrishnan for quantiles, virtually no results are known for (k, ƒ, τ, ε) problems.
In computer science, there are communication complexity bounds that minimize the bits needed to compute a given function ƒ of inputs at any particular time over k parties. They do not, however, minimize the bits needed continuously over the entire time. These bounds are one-shot problems. The central issue in the continuous problems disclosed here is how often, and when, to repeat parts of such protocols over time to minimize the overall number of bits transferred.
The “streaming model” (see Alon et al., cited above) has received much attention in recent years. There are many functions ƒ that can be computed up to 1±ε accuracy in the streaming model, using poly(1/ε, log n) space. This includes streaming algorithms for problems such as estimating frequency moments. There have been several works in the database community that consider the streaming model under the distributed setting, which is essentially the same as the model disclosed here. Subsequently several functional monitoring problems have been considered in this distributed streaming model, but the devised solutions are typically heuristics-based, and the worst-case bounds are usually large and far from optimal. See G. Cormode and M. Garofalakis, “Sketching streams through the net: Distributed approximate query tracking,” Intl. Conf. Very Large Data Bases, 2005; G. Cormode, M. Garofalakis, S. Muthukrishnan, and R. Rastogi, “Holistic aggregates in a networked world: Distributed tracking of approximate quantiles,” ACM SIGMOD Intl. Conf. Management of Data, 2005; G. Cormode, S. Muthukrishnan, and W. Zhuang, “Conquering the divide: Continuous clustering of distributed data streams,” Intl. Conf. on Data Engineering, 2007; and R. Keralapura, G. Cormode, and J. Ramamirtham, “Communication-efficient distributed monitoring of thresholded counts,” ACM SIGMOD Intl. Conf. Management of Data, 2006. In this disclosure, improved upper bounds for some basic functional monitoring problems are provided.
Accordingly, in one aspect, this invention provides a method for continuous distributed monitoring of computer network activity, focusing on frequency moments, given by formula (I).
F_{p}=Σ_{i}m_{i}^{p} (I)
where F_{p} is the frequency moment of order p, and m_{i} is the frequency of item i over all sites.
Estimating the frequency moments has become the keystone problem in streaming algorithms since the seminal paper of Alon et al. (cited above). In particular, the first three frequency moments, where p=0, 1, or 2, are useful in this invention. Briefly, F_{1} represents a simple summation of all elements, F_{0} corresponds to the number of distinct elements, and F_{2} is the sum of the squares of the item frequencies; F_{2} has found many applications, such as the surprise index and join sizes.
Frequency moment calculations have previously been applied to analysis of data in databases, such as characteristics and distribution of data in large data sets. See, for example, Faloutsos et al., in U.S. Pat. No. 5,758,338, and Alon, et al. in U.S. Pat. No. 5,950,185.
Table 1 summarizes the bounds presented for this method. The method of the present invention is chiefly concerned with the continuous bounds, particularly the upper bounds, since an objective of the instant invention is to convey the necessary information with the smallest amount of data transfer, i.e., to minimize the communication cost of reporting aggregate network functions.
TABLE 1
Summary of the communication complexity for one-shot and continuous threshold monitoring of different frequency moments. The “randomized” bounds are expected communication bounds for randomized algorithms with failure probability δ < ½. (The continuous bounds are those established below; one-shot upper-bound entries were not preserved in this copy of the table.)

Moment | Continuous lower bound | Continuous upper bound | One-shot lower bound | One-shot upper bound
F_{0}, randomized | Ω(k) | O(k/ε^{2}) | Ω(k) |
F_{1}, deterministic | Ω(k log(1/(εk))) | O(k log(1/ε)) | |
F_{1}, randomized | Ω(min{k, 1/ε}) | O((1/ε^{2}) log(1/δ)) | Ω(k) |
F_{2}, randomized | Ω(k) | Õ(k^{2}/ε+(k^{1/2}/ε)^{3}) | Ω(k) |
For the (k, F_{1}, τ, ε) problem, this method shows deterministic bounds of O(k log(1/ε)) and Ω(k log(1/(εk))), and randomized bounds of Ω(min{k, 1/ε}) and O((1/ε^{2}) log(1/δ)), independent of k, where δ is the algorithm's probability of failure. Hence, randomization can give a significant asymptotic improvement, and curiously, k is not an inherent factor. These bounds improve the previous result of O((k/ε) log(τ/k)) in the paper by R. Keralapura, G. Cormode, and J. Ramamirtham, “Communication-efficient distributed monitoring of thresholded counts,” ACM SIGMOD Intl. Conf. Management of Data, 2006.
For the (k, F_{0}, τ, ε) problem, this method shows a (randomized) upper bound of O(k/ε^{2}), which improves on the previous result of O(k^{2}/ε^{3 }log n log 1/δ), presented in the paper by G. Cormode, S. Muthukrishnan, and W. Zhuang “What's different: Distributed, continuous monitoring of duplicate resilient aggregates on data streams,” Intl. Conf. on Data Engineering, 2006. This method also gives a lower bound of Ω(k).
For the (k, F_{2}, τ, ε) problem, this method presents an upper bound of Õ(k^{2}/ε+(k^{1/2}/ε)^{3}), improving on the previous result of Õ(k^{2}/ε^{4}) published by G. Cormode and M. Garofalakis, “Sketching streams through the net: Distributed approximate query tracking,” Intl. Conf. Very Large Data Bases, 2005. This method also gives a lower bound of Ω(k). The algorithm is a more sophisticated form of the “GLOBAL” algorithm (see above), with multiple rounds, using different “sketch summaries” at multiple levels of accuracy. The Õ notation suppresses logarithmic factors in n, k, m, t, 1/ε, and 1/δ.
Problem Formulation
Consider a sequence of elements A=(a_{1}, . . . , a_{m}), where a_{i}∈{1, . . . , n}. Let m_{i}=|{j:a_{j}=i}| be the number of occurrences of i in A, and define the p-th frequency moment of A as F_{p}(A)=Σ^{n}_{i=1}m^{p}_{i} for each p≧0. In the distributed setting, the sequence A is observed in order by k≧2 remote sites S_{1}, . . . , S_{k} collectively, i.e., the element a_{i} is observed by exactly one of the sites at time instance i. There is a designated coordinator that is responsible for deciding if F_{p}(A)≧τ for some given threshold τ. Determining this at a single time instant t yields a class of one-shot queries, but in this invention, the interest is in continuous monitoring (k, ƒ, τ, ε) queries, where the coordinator must correctly answer over the collection of elements observed thus far (A(t)), for all time instants t.
In the approximate version of these problems, for a parameter 0<ε≦¼, the coordinator should output 1 to raise an alert if F_{p}(A(t))≧τ, and output 0 if F_{p}(A(t))≦(1−ε)τ. If F_{p} is in between, the coordinator may give either output, and need not change the output from the previous time instant. Since the frequency moments never decrease as elements are received, the continuous-monitoring problem can also be interpreted as the problem of deciding a time instance t at which to raise an alarm, such that t_{1}≦t≦t_{2}, where t_{1}=arg min_{t}{F_{p}(A(t))>(1−ε)τ} and t_{2}=arg min_{t}{F_{p}(A(t))≧τ}. The continuous algorithm terminates when such a t is determined.
We assume that the remote sites know the values of τ, ε, and n in advance, but not m. The cost of an algorithm is measured by the number of bits that are communicated. We assume that the threshold τ is sufficiently large to simplify analysis and the bounds. Dealing with small τ's is mainly technical: we just need to carefully choose when to use the naive algorithm that simply sends every single element to the coordinator.
A simple observation implies that the continuous-monitoring problem is almost always as hard as the corresponding one-shot problem: for any monotone function ƒ, an algorithm for (k, ƒ, τ, ε) functional monitoring that communicates g(k, n, m, τ, ε) bits implies a one-shot algorithm that communicates g(k, n, m, τ, ε)+O(k) bits.
General Algorithm for F_{p }where p≧1
This is a general algorithm based on each site monitoring only local updates. The algorithm gives initial upper bounds, which we improve for specific cases in subsequent sections. Upper bounds are more important than lower bounds in this invention, since our goal is to minimize communication traffic at the upper bound of a given function.
The algorithm proceeds in multiple rounds, based on the generalized GLOBAL method, where the network manager updates the remote devices in real time with parameters on which the decision to make a report are based. Thus, whenever the coordinator receives a report, the remote devices are iteratively updated, changing the threshold required to make a report.
Let u_{i} be the frequency vector (m_{1}, . . . , m_{n}) at the beginning of round i. In round i, every site keeps a copy of u_{i} and a threshold t_{i}. Let v_{ij} be the frequency vector of the updates received at site j during round i. Whenever the impact of v_{ij} causes the F_{p} moment locally to increase by more than t_{i} (or multiples thereof), the site informs the coordinator. After the coordinator has received more than k such indications, it ends the round, collects information about all k vectors v_{ij} from the sites, computes a new global state u_{i+1}, and distributes it to all sites.
More precisely, the round threshold is defined as t_{i}=½(τ−∥u_{i}∥_{p}^{p})k^{−p}, chosen to divide the current “slack” uniformly between sites. Each site j receives a set of updates during round i, which we represent as a vector v_{ij}. During round i, whenever ⌊∥u_{i}+v_{ij}∥_{p}^{p}/t_{i}⌋ increases, site j sends a bit to indicate this (if this quantity increases by more than one, the site sends one bit for each increase). That is, ∥u_{i}+v_{ij}∥_{p}^{p}/t_{i} is rounded down to the nearest whole integer, and a bit is sent only when this rounded quantity increases; this ensures the necessary accuracy with fewer messages from the sites to the coordinator than if a message were sent every time the underlying quantity changed. After the coordinator has received k bits in total, it ends round i and collects v_{ij} (or some compact summary of v_{ij}) from each site. It computes u_{i+1}=u_{i}+Σ^{k}_{j=1}v_{ij}, and hence t_{i+1}, and sends these to all sites, beginning round i+1. The coordinator changes its output to 1 when ∥u_{i}∥_{p}^{p}≧(1−ε/2)τ, and the algorithm terminates.
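The round structure above can be illustrated for p=1, where ∥·∥_{1}^{1} is a simple count. The following Python sketch (illustrative names; the coordinator's own messages are not simulated bit-by-bit) has each site send a bit per t_{i} local arrivals and the coordinator end a round after k bits:

```python
def monitor_f1(stream_sites, tau, k, eps=0.5):
    """Illustrative simulation of the round-based monitor for p = 1.
    t_i is half the remaining slack divided among the k sites; k bits
    end a round, at which point exact local counts are collected.
    Returns (alarm_raised, total_bits_sent_by_sites)."""
    total = 0               # u_i as a count: global total at round start
    v = [0] * k             # per-site arrivals during the current round
    sent = [0] * k          # bits already sent by each site this round
    round_bits = 0
    bits_total = 0
    t_i = (tau - total) / (2 * k)
    for site in stream_sites:
        v[site] += 1
        new = int(v[site] / t_i) - sent[site]   # floor crossed new multiples?
        if new > 0:
            sent[site] += new
            round_bits += new
            bits_total += new
        if round_bits >= k:                     # coordinator ends the round
            total += sum(v)                     # collect exact local counts
            v = [0] * k
            sent = [0] * k
            round_bits = 0
            if total >= (1 - eps / 2) * tau:    # output changes to 1
                return True, bits_total
            t_i = (tau - total) / (2 * k)
    return False, bits_total
```

With k=2 sites, τ=16, and items alternating between the sites, the first round threshold is 4 and the second is 2; the alarm fires after 12 items with only 4 bits sent upward.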
Consider the case where p=1. The upper bound is O(k log(1/ε)) messages of counts being exchanged. In fact, we can give a tighter bound: the coordinator can omit the step of collecting the current v_{ij}'s from each site, and instead just sends a message to advance to the next stage. The value of t_{i} is computed simply as 2^{−1−i}τ/k, and the coordinator has to send only a constant number of bits to each site to signal the end of round i. Thus, we obtain a bound of O(k log(1/ε)) bits. This is a simpler scheme than the one presented in R. Keralapura, G. Cormode, and J. Ramamirtham, “Communication-efficient distributed monitoring of thresholded counts,” ACM SIGMOD Intl. Conf. Management of Data, 2006, which used an upper bound of O((k/ε) log(τ/k)).
Next, consider the case of p=2. In order to concisely convey information about the vectors v_{ij}, we make use of “sketch summaries” of vectors. See Alon, et al., cited above. These sketches have the property that (with probability at least 1−δ) they allow F_{2} of the summarized vector to be estimated with relative error ε, in O((1/ε^{2}) log τ log(1/δ)) bits. We can apply these sketches in the above protocol for p=2 by replacing each instance of u_{i} and v_{ij} with a sketch of the corresponding vector. Note that we can easily perform the necessary arithmetic to form a sketch of u_{i}+v_{ij} and hence find (an estimate of) ∥u_{i}+v_{ij}∥_{2}^{2}. In order to account for the inaccuracy introduced by the approximate sketches, we must carefully set the error parameter ε′ of the sketches. Since we compare the change in ∥u_{i}+v_{ij}∥_{2}^{2} to t_{i}, we need the error given by the sketch—which is ε′∥u_{i}+v_{ij}∥_{2}^{2}—to be at most a constant fraction of t_{i}, which can be as small as ετ/(2k^{2}). Thus we need to set ε′=O(ε/k^{2}). Putting this all together gives a total communication cost of Õ(k^{6}/ε^{2}).
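A toy sketch-summary construction in the AMS style can illustrate how site vectors are summarized and combined linearly (illustrative Python; real implementations use four-wise independent hash families rather than a seeded generator per item, and this is not asserted to be the patent's exact construction):

```python
import random

class AMSSketch:
    """Toy AMS-style sketch for F_2: each counter z[r] accumulates
    sum_i s_r(i) * m_i for pseudo-random +/-1 signs s_r, and the mean
    of z[r]^2 over the repetitions estimates F_2 = sum_i m_i^2."""

    def __init__(self, reps=400, seed=1):
        self.reps = reps
        self.seed = seed
        self.z = [0] * reps

    def _sign(self, r, item):
        # Deterministic +/-1 sign for (repetition, item); items are ints.
        rng = random.Random(self.seed + 1_000_003 * r + 7919 * item)
        return 1 if rng.random() < 0.5 else -1

    def update(self, item, delta=1):
        for r in range(self.reps):
            self.z[r] += self._sign(r, item) * delta

    def add(self, other):
        """Sketches are linear, so per-site sketches can be summed to
        obtain the sketch of the combined update vector."""
        out = AMSSketch(self.reps, self.seed)
        out.z = [a + b for a, b in zip(self.z, other.z)]
        return out

    def estimate_f2(self):
        return sum(x * x for x in self.z) / self.reps
```

Linearity is exact, which is what lets the coordinator form a sketch of u_{i}+v_{ij} from separately maintained sketches; the F_{2} estimate itself carries the usual relative error controlled by the number of repetitions.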
Randomized/Improved Bounds for F_{1 }
The simplest case is monitoring F_{1}, the total number of elements observed. As noted above, there is a deterministic algorithm for monitoring F_{1} that communicates O(k log 1/ε) bits. This is nearly optimal: any deterministic algorithm that solves (k, F_{1}, τ, ε) functional monitoring has to communicate Ω(k log(1/(εk))) bits.
A randomized algorithm can be shown for (k, F_{1}, τ, ε) functional monitoring with error probability at most δ that communicates O((1/ε^{2}) log(1/δ)) bits. The core is a careful implementation of a coin-toss procedure with error probability ⅓: every time a site receives ε^{2}τ/(ck) elements, where c is a constant determined below, it sends a signal to the coordinator with probability 1/k. The coordinator raises an alarm as soon as it has received c/ε^{2}−c/(2ε) such signals, and terminates the algorithm. Choosing c=96 makes the probability of each failure mode—raising the alarm before the count reaches (1−ε)τ, or failing to raise it once the count reaches τ—at most ⅙, as desired. By running O(log 1/δ) independent instances and raising an alarm when at least half of the instances have raised alarms, we amplify the success probability to 1−δ, as required.
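The expected-signal arithmetic behind this protocol can be checked directly. With batches of ε^{2}τ/(ck) elements and per-batch signal probability 1/k, a global count of m·τ yields ckm/ε^{2} batches and hence cm/ε^{2} expected signals, independent of k; the alarm threshold sits exactly halfway between the expectations at (1−ε)τ and at τ. A small illustrative helper (the function name is hypothetical):

```python
def f1_randomized_thresholds(eps, c=96):
    """Expected signal counts in the randomized F_1 protocol.  Returns
    (expected signals at count (1-eps)*tau, the coordinator's alarm
    threshold, expected signals at count tau).  The threshold
    c/eps^2 - c/(2*eps) is equidistant from the two expectations, which
    is what makes both error probabilities small for large enough c."""
    at_low = (1 - eps) * c / eps ** 2        # expectation at (1-eps)*tau
    at_tau = c / eps ** 2                    # expectation at tau
    alarm = c / eps ** 2 - c / (2 * eps)     # alarm threshold
    return at_low, alarm, at_tau
```

With ε=¼ and c=96, the expectations are 1152 and 1536, and the alarm threshold 1344 lies exactly 192=c/(2ε) signals from each.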
A randomized algorithm is better than a deterministic algorithm when ε is large enough relative to k. In addition, for any ε<¼, any probabilistic protocol for (k, F_{1}, τ, ε) functional monitoring that errs with probability smaller than ½ has to communicate Ω(min {k, 1/ε}) bits in expectation.
Bounds for F_{0 }
We know that the F_{1} problem can be solved deterministically and exactly by setting ε=1/τ, communicating O(k log τ) bits. For any p≠1, the arguments of Propositions 3.7 and 3.8 in Alon et al. (cited above) apply to show that both randomness (Monte Carlo) and approximation are necessary for the F_{p} problem in order to get solutions with communication cost better than Ω(n) for any k≧2. So we only need to consider probabilistic protocols that err with some probability δ.
For monitoring F_{0}, we can generalize the sketch published by Z. Bar-Yossef, T. S. Jayram, R. Kumar, D. Sivakumar, and L. Trevisan, "Counting distinct elements in a data stream," RANDOM, 2002, to the distributed setting.
The basic idea is that, since the F_{0} sketch changes "monotonically", i.e., once an entry is added it will never be removed, we can afford to communicate to the coordinator every addition to the sketches maintained by the individual sites. On the lower-bound side, for any ε≦¼ and n≧k^{2}, any probabilistic protocol for (k, F_{0}, τ, ε) functional monitoring that errs with probability smaller than ½ has to communicate Ω(k) bits in expectation.
In this model, there is a randomized algorithm for the (k, F_{0}, τ, ε) functional monitoring problem with error probability at most δ that communicates O(k(log n+(1/ε^{2}) log(1/ε))) bits. An algorithm can be shown with an error probability of ⅓; this can be driven down to δ by running O(log 1/δ) independent copies of the algorithm.
Let t be the integer such that 48/ε^{2}≦τ/2^{t}<96/ε^{2}. The coordinator first picks two random pairwise-independent hash functions f:[n]→[n] and g:[n]→[6·(96/ε^{2})^{2}], and sends them to all remote sites. This incurs a communication cost of O(k(log n+log 1/ε))=O(k log n) bits. Next, each remote site evaluates f(a_{i}) for every incoming element a_{i}, and tests whether the last t bits of f(a_{i}) are all zeros. If so, the remote site evaluates g(a_{i}). Each site keeps a local buffer that contains all g( ) values for such elements. If g(a_{i}) is not in the buffer, the site adds g(a_{i}) to the buffer and sends it to the coordinator. The coordinator also keeps a buffer of all unique g( ) values received, and outputs 1 whenever the number of elements in its buffer exceeds (1−ε/2)τ/2^{t}. Since each g( ) value takes O(log 1/ε) bits, the claimed bound easily follows.
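A single-trace simulation of this F_{0} protocol, under simplifying assumptions stated in the docstring (truly random hashes cached in dictionaries stand in for the pairwise-independent f and g, elements are assigned to sites round-robin, and τ≧48/ε^{2}; the function name is illustrative):

```python
import random

def monitor_f0(stream, n, tau, eps, k=4, seed=1):
    """Sketch of the distributed F_0 protocol.  Each site forwards g(a)
    for elements a whose hash f(a) ends in t zero bits; the coordinator
    outputs 1 when its buffer of distinct g-values exceeds
    (1 - eps/2) * tau / 2^t.  Returns the number of elements processed
    when the coordinator first outputs 1, or None if it never does."""
    rng = random.Random(seed)
    t = 0
    while tau / 2 ** (t + 1) >= 48 / eps ** 2:
        t += 1                                # 48/eps^2 <= tau/2^t < 96/eps^2
    g_range = 6 * int(96 / eps ** 2) ** 2
    f_cache, g_cache = {}, {}                 # random-oracle stand-ins for f, g
    site_buf = [set() for _ in range(k)]
    coord = set()                             # unique g-values at coordinator
    for idx, a in enumerate(stream, 1):
        j = idx % k                           # round-robin site assignment
        f = f_cache.setdefault(a, rng.randrange(n))
        if f % (1 << t) == 0:                 # last t bits of f(a) all zero
            g = g_cache.setdefault(a, rng.randrange(g_range))
            if g not in site_buf[j]:
                site_buf[j].add(g)            # buffer locally, then
                coord.add(g)                  # send g(a) to the coordinator
            if len(coord) > (1 - eps / 2) * tau / 2 ** t:
                return idx                    # coordinator outputs 1
    return None
```

With τ=256 and ε=½ the computed t is 0, so every distinct element is forwarded and the coordinator fires once it has seen just over (1−ε/2)τ=192 distinct g-values.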
Bounds for F_{2 }
The F_{2} monitoring algorithm has a communication cost of Õ(k^{2}/ε+k^{3/2}/ε^{3}). This is an improvement over the bound from the prior art, reported in G. Cormode and M. Garofalakis, "Sketching streams through the net: Distributed approximate query tracking," Intl. Conf. Very Large Data Bases, 2005.
F_{2} presents a more complex situation than F_{0} or F_{1}. The F_{2} algorithm of this method has two phases. At the end of the first phase, we ensure that F_{2} is between ¾τ and τ; in the second phase, we more carefully monitor F_{2} until it is in the range ((1−ε)τ, τ).
Each phase is divided into multiple rounds. In the second phase, each round is further divided into multiple sub-rounds to allow for more careful monitoring with minimal communication. We use sketches such that, with probability at least 1−δ, they estimate F_{2} of the sketched vector within 1±ε using O((1/ε^{2}) log n log 1/δ) bits. See Alon, cited above. Initially, assume that all sketch estimates are within their approximation guarantees. At a later stage, δ will be set to ensure only a small probability of failure over the entire computation.
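A minimal model of such a sketch, idealized for illustration (independent random ±1 signs per coordinate stand in for the 4-wise independent hash families of Alon et al., and the function names are hypothetical). The key property used below is linearity: because the same seed yields the same signs, sites and the server can add sketches componentwise to obtain a sketch of a sum of vectors.

```python
import random
from statistics import median

def f2_sketch(vec, rows=5, cols=16, seed=7):
    """Each sketch entry is a random +/-1 combination of the vector's
    coordinates.  The same seed yields the same sign matrix, so
    f2_sketch(u) + f2_sketch(v) == f2_sketch(u + v) componentwise."""
    rng = random.Random(seed)
    signs = [[rng.choice((-1, 1)) for _ in vec] for _ in range(rows * cols)]
    return [sum(s * x for s, x in zip(row, vec)) for row in signs]

def f2_estimate(sketch, rows=5, cols=16):
    """Median-of-means estimate of F_2 = ||vec||_2^2 from the sketch."""
    means = [sum(x * x for x in sketch[r * cols:(r + 1) * cols]) / cols
             for r in range(rows)]
    return median(means)
```

Linearity is what allows the protocol to form a sketch of u_{i}+v_{ij} from separately maintained sketches of u_{i} and v_{ij}, without shipping the vectors themselves.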
Algorithm. We proceed in multiple rounds, which are in turn divided into subrounds. Let u_{i} be the frequency vector of the union of the streams at the beginning of the ith round, and û^{2}_{i} be an approximation of u^{2}_{i}. In round i, we use a local threshold t_{i}=(τ−û^{2}_{i})^{2}/(64k^{2}τ). Let v_{ijl} be the local frequency vector of updates received at site j during subround l of round i, and let w_{il}=Σ^{k}_{j=1}v_{ijl} be the total increment of the frequency vectors in subround l of round i. During each (sub)round, each site j continuously monitors its v^{2}_{ijl}, and sends a bit to the server whenever ⌊v^{2}_{ijl}/t_{i}⌋ increases.
Phase one. In phase one, there is only one subround per round. At the beginning of round i, the server computes a 5/4-overestimate û^{2}_{i} of the current u^{2}_{i}, i.e., u^{2}_{i}≦û^{2}_{i}≦(5/4)u^{2}_{i}. This can be done by collecting sketches from all sites with a communication cost of O(k log n). Initially û^{2}_{i}=u^{2}_{i}=0. When the server has received k bits in total from the sites, it ends the round by computing a new estimate û^{2}_{i+1} for u^{2}_{i+1}. If û^{2}_{i+1}≧(15/16)τ, then we must have u^{2}_{i+1}≧û^{2}_{i+1}/(5/4)≧¾τ, so we proceed to the second phase. Otherwise the server computes the new t_{i+1}, broadcasts it to all sites, and proceeds to the next round of phase one.
Phase two. In the second phase, the server computes a (1+ε/3)-overestimate û^{2}_{i} at the start of each round by collecting sketches from the sites with a communication cost of O(k/ε log n). The server also keeps an upper bound û^{2}_{i,l} on u^{2}_{i,l}, where u_{i,l} is the frequency vector at the beginning of the l-th subround in round i.
As above, during each sub-round, each site j continuously monitors its v^{2}_{ijl}, and sends a bit to the server whenever ⌊v^{2}_{ijl}/t_{i}⌋ increases. When the server has collected k bits in total, it ends the sub-round. Then, it asks each site j to send a (1±½)-approximate sketch of v^{2}_{ijl}. If û^{2}_{i,l+1}+3k∥û_{i,l+1}∥√t_{i}<τ, then the server starts another sub-round, l+1. If not, then the round ends, and the server computes a new û^{2}_{i+1} for u^{2}_{i+1}. If û^{2}_{i+1}≧(1−(⅔)ε)τ, the server changes its output to 1 and terminates the algorithm. Otherwise, it computes the new t_{i+1}, sends it to all sites, and starts the next round.
For functional monitoring problems (k, ƒ, τ, ε), this work yields the surprising result that for some functions, the communication cost is close to or the same as the cost for one-time computation of ƒ, and can even be less than the number of participants, k. Our results for F_{2} make careful use of compact sketch summaries, switching between different levels of approximation quality to minimize the overall cost. These algorithms are more generally useful, since they immediately apply to monitoring L_{2} and L_{2}^{2} of arbitrary nonnegative vectors, which is at the heart of many practical computations such as join size estimation, wavelet and histogram representations, and geometric problems. See G. Cormode and M. Garofalakis, "Sketching streams through the net: Distributed approximate query tracking," Intl. Conf. Very Large Data Bases, 2005; and P. Indyk, "Algorithms for dynamic geometric problems over data streams," ACM Symp. Theory of Computing, 2004. Likewise, our F_{1} techniques are applicable to continuously track quantiles and heavy hitters of time-varying distributions. See G. Cormode, M. Garofalakis, S. Muthukrishnan, and R. Rastogi, "Holistic aggregates in a networked world: Distributed tracking of approximate quantiles," ACM SIGMOD Intl. Conf. Management of Data, 2005.