Instantiates a new event loop that is always distinct from the default loop. Unlike the default loop, it cannot handle Child watchers, and attempts to do so will raise an Error.
One common way to use libev with threads is indeed to create one Loop per thread, and use the default loop (from default_loop()) in the “main” or “initial” thread.
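For instance, a minimal sketch of that pattern using the standard threading module (the worker function and its watchers are hypothetical application code):

    import threading
    import pyev

    def worker():
        loop = pyev.Loop()  # per-thread loop, distinct from the default loop
        # ... register this thread's watchers on `loop` here ...
        loop.start()

    main_loop = pyev.default_loop()  # the default loop stays in the "main" thread
    threading.Thread(target=worker).start()
    # ... register main-thread watchers on `main_loop` here ...
    main_loop.start()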
Parameters: flags (int) – defaults to 0. See Loop.start() flags.
This method is usually called after you have initialised all your watchers and you want to start handling events.
Returns False if there are no more active watchers (which usually means “all jobs done” or “deadlock”), and True in all other cases (which usually means you should call start() again).
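For instance, the return value can drive a simple restart loop (a sketch; the watchers themselves are assumed to have been set up already):

    import pyev

    loop = pyev.default_loop()
    # ... initialise and start watchers on `loop` here ...
    while loop.start():
        # True: active watchers remain (start() returned because of a stop()),
        # so resume handling events
        pass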
Parameters: how (int) – defaults to EVBREAK_ONE. See Loop.stop() how.
Can be used to make a call to start() return early (but only after it has processed all outstanding events).
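A typical sketch, stopping the loop from a Signal watcher callback (assuming pyev’s documented Signal constructor):

    import signal
    import pyev

    def sigint_cb(watcher, revents):
        # make the innermost start() return once outstanding events are processed
        watcher.loop.stop(pyev.EVBREAK_ONE)

    loop = pyev.default_loop()
    sig = pyev.Signal(signal.SIGINT, loop, sigint_cb)
    sig.start()
    loop.start()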
This method will simply invoke all pending watchers while resetting their pending state. Normally, the loop does this automatically when required, but when setting the callback attribute this call comes in handy.
This method sets a flag that causes subsequent loop iterations to reinitialise the kernel state for backends that have one. You can call it anytime, but it makes most sense after forking, in the child process. You must call it (or use EVFLAG_FORKCHECK) in the child before calling resume() or start(). Again, you have to call it on any loop that you want to re-use after a fork, even if you do not plan to use the loop in the parent.
On the other hand, you only have to call this method in the child process if you want to use the event loop there. If you just fork()+exec() or create a new loop in the child, you don’t have to call it at all.
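A sketch of the fork case described above: only the child calls reset() before reusing the loop.

    import os
    import pyev

    loop = pyev.default_loop()
    # ... watchers registered on `loop` here ...

    pid = os.fork()
    if pid == 0:
        loop.reset()   # child: re-initialise kernel state before reusing the loop
        loop.start()
    else:
        loop.start()   # parent: no reset() needed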
Returns the current “event loop time”, which is the time the event loop received events and started processing them. This timestamp does not change as long as callbacks are being processed, and this is also the base time used for relative timers. You can treat it as the timestamp of the event occurring (or more correctly, libev finding out about it).
Establishes the current time by querying the kernel, updating the time returned by now() in the process. This is a costly operation and is usually done automatically within the loop. This method is rarely useful, but when some event callback runs for a very long time without entering the event loop, updating libev’s idea of the current time is a good idea.
These two methods should be used when the loop is not used for a while and timeouts should not be processed. A typical use case would be an interactive program such as a game: when the user presses Control-z to suspend the game and resumes it an hour later it would be best to handle timeouts as if no time had actually passed while the program was suspended. This can be achieved by calling suspend() in your SIGTSTP handler, sending yourself a SIGSTOP and calling resume() directly afterwards to resume timer processing.
Effectively, all Timer watchers will be delayed by the time spent between suspend() and resume(), and all Periodic watchers will be rescheduled (that is, they will lose any events that would have occurred while suspended). After calling suspend() you must not call any method on the loop other than resume(), and you must not call resume() without a previous call to suspend(). Calling suspend()/resume() has the side effect of updating the event loop time (see update()).
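A sketch of the Control-z scenario above (the SIGTSTP handler is hypothetical application code):

    import os
    import signal
    import pyev

    loop = pyev.default_loop()

    def sigtstp_handler(signum, frame):
        loop.suspend()                        # freeze the loop's notion of time
        os.kill(os.getpid(), signal.SIGSTOP)  # actually stop until resumed
        loop.resume()                         # timers behave as if no time had passed

    signal.signal(signal.SIGTSTP, sigtstp_handler)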
unref()/ref() can be used to add or remove a reference count on the event loop: every watcher keeps one reference, and as long as the reference count is nonzero, the loop will not return on its own.
This is useful when you have a watcher that you never intend to unregister, but that nevertheless should not keep the loop from returning. In such a case, call unref() after starting, and ref() before stopping it. As an example, libev itself uses this for its internal signal pipe: it is not visible to the user and should not keep the loop from exiting if no event watchers registered by it are active. It is also good to do this for generic recurring timers or from within third-party libraries. Just remember to unref() after Watcher.start() and ref() before Watcher.stop() (but only if the watcher wasn’t active before, or was active before, respectively. Note also that libev might stop watchers itself (e.g. non-repeating timers) in which case you have to ref() in the callback).
Note
These two methods have nothing to do with Python reference counting.
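For example, a sketch of the pattern above: a recurring housekeeping Timer (constructed with pyev’s documented Timer signature) that should not by itself keep the loop from returning:

    import pyev

    def housekeeping(watcher, revents):
        pass  # hypothetical periodic cleanup work

    loop = pyev.default_loop()
    cleaner = pyev.Timer(60.0, 60.0, loop, housekeeping)
    cleaner.start()
    loop.unref()   # drop the reference added by the watcher: the loop may now
                   # return even though `cleaner` is still active
    # ... later, if the watcher is still active and you want to stop it:
    loop.ref()
    cleaner.stop()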
This method only does something when EV_VERIFY support has been compiled in libev (which is the default for non-minimal builds). It tries to go through all internal structures and checks them for validity. If anything is found to be inconsistent, it will print an error message to standard error and call abort(). This can be used to catch bugs inside libev itself: under normal circumstances, this method should never abort.
Loop data.
The current invoke pending callback. Its signature must be: callback(loop)
Parameters: loop (Loop object) – this loop.
This overrides the invoke pending functionality of the loop: instead of invoking all pending watchers when there are any, the loop will call this callback instead (use invoke() if you want to invoke all pending watchers). This is useful, for example, when you want to invoke the actual watchers inside another context (another thread, etc.).
If you want to reset the callback, set it to None.
Warning
If the callback raises an error, pyev will stop the loop.
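As a sketch of the deferred-invocation idea (the hand-off list and its consumer are hypothetical; a real multi-threaded version would also need locking), the callback only records that the loop has pending watchers and another context later calls invoke():

    import pyev

    pending_loops = []          # hypothetical hand-off structure

    def defer_invoke(loop):
        # called by the loop instead of invoking pending watchers directly
        pending_loops.append(loop)

    loop = pyev.default_loop()
    loop.callback = defer_invoke

    # ... elsewhere, in the other context:
    for l in pending_loops:
        l.invoke()              # actually run the pending watcher callbacks
    pending_loops.clear()

    loop.callback = None        # restore the default behaviour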
These two attributes influence the time that libev will spend waiting for events. Both time intervals are by default 0.0, meaning that libev will try to invoke Timer/Periodic callbacks and Io callbacks with minimum latency. Setting these to a higher value (the interval must be >= 0) allows libev to delay invocation of Io and Timer/Periodic callbacks to increase efficiency of loop iterations (or to increase power-saving opportunities). The idea is that sometimes your program runs just fast enough to handle one (or very few) event(s) per loop iteration. While this makes the program responsive, it also wastes a lot of CPU time to poll for new events, especially with backends like select which have a high overhead for the actual polling but can deliver many events at once.
By setting a higher io_interval you allow libev to spend more time collecting Io events, so you can handle more events per iteration, at the cost of increasing latency. Timeouts (both Periodic and Timer) will not be affected. Setting this to a non-zero value will introduce an additional sleep() call into most loop iterations. The sleep time ensures that libev will not poll for Io events more often than once per this interval, on average (as long as the host time resolution is good enough). Many (busy) programs can usually benefit from setting the io_interval to a value near 0.1 or so, which is often enough for interactive servers (of course not for games), likewise for timeouts. It usually doesn’t make much sense to set it to a lower value than 0.01, as this approaches the timing granularity of most systems. Note that if you do transactions with the outside world and you can’t increase the parallelism, then this setting will limit your transaction rate (if you need to poll once per transaction and the io_interval is 0.01, then you can’t do more than 100 transactions per second).
Likewise, by setting a higher timeout_interval you allow libev to spend more time collecting timeouts, at the expense of increased latency/jitter/inexactness (the watcher callback will be called later). Io watchers will not be affected. Setting this to a non-zero value will not introduce any overhead in libev. Setting the timeout_interval can improve the opportunity for saving power, as the program will “bundle” timer callback invocations that are “near” in time together, by delaying some, thus reducing the number of times the process sleeps and wakes up again. Another useful technique to reduce iterations/wake-ups is to use Periodic watchers and make sure they fire on, say, one-second boundaries only.
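For example (the values are purely illustrative), a busy server might trade a little latency for fewer wake-ups:

    import pyev

    loop = pyev.default_loop()
    loop.io_interval = 0.05       # collect Io events for up to ~50 ms per iteration
    loop.timeout_interval = 0.1   # allow timer callbacks to be bundled within ~100 ms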
This affects the behaviour of the loop while executing all watcher callbacks (Watcher.callback and Scheduler.scheduler).
If False (the default), when a callback returns with an unhandled exception, the loop will print a warning and suppress the exception. In this configuration, the loop will only stop on fatal errors (memory allocation failure, EV_ERROR received, ...).
If True, the loop will stop on all errors (you do not want that if you write a server).
Read only
True if the loop is the default loop, False otherwise.
Read only
The number of pending watchers - 0 indicates that no watchers are pending.
Read only
The current iteration count for the loop, which is identical to the number of times libev did poll for new events. It starts at 0 and happily wraps around with enough iterations. This value can sometimes be useful as a generation counter of sorts (it “ticks” the number of loop iterations), as it roughly corresponds with Prepare and Check calls - and is incremented between the prepare and check phases.
Read only
The number of times start() was entered minus the number of times start() was exited normally, in other words, the recursion depth. Outside start(), this number is 0. In a callback, this number is 1, unless start() was invoked recursively (or from another thread), in which case it is higher.
The default flags value.
If this flag bit is or’ed into the flags value (or the program runs setuid() or setgid()) then libev will not look at the environment variable LIBEV_FLAGS. Otherwise (the default), LIBEV_FLAGS will override the flags completely if it is found in the environment. This is useful to try out specific backends to test their performance, or to work around bugs.
Instead of calling Loop.reset() manually after a fork, you can also make libev check for a fork in each iteration by enabling this flag. This works by calling getpid() on every iteration of the loop, and thus this might slow down your event loop if you do a lot of loop iterations and little real work, but is usually not noticeable. The big advantage of this flag is that you can forget about fork (and forget about forgetting to tell libev about forking) when you use it. This flag setting cannot be overridden or specified in the LIBEV_FLAGS environment variable.
When this flag is specified, then libev will attempt to use the signalfd API for the Signal (and Child) watchers. This API delivers signals synchronously, which makes it faster and might make it possible to get the queued signal data. It can also simplify signal handling with threads, as long as you properly block signals in your threads that are not interested in handling them. signalfd will not be used by default as this changes your signal mask.
When this flag is specified, then libev will avoid to modify the signal mask. Specifically, this means you have to make sure signals are unblocked when you want to receive them. This behaviour is useful when you want to do your own signal handling, or want to handle signals only in specific threads and want to avoid libev unblocking the signals. It’s also required by POSIX in a threaded program, as libev calls sigprocmask(), whose behaviour is officially unspecified. This flag’s behaviour will become the default in future versions of libev.
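These flag bits are or’ed together and passed as the flags argument of Loop() or default_loop(), for example (a sketch; the particular combination is only illustrative):

    import pyev

    loop = pyev.Loop(pyev.EVFLAG_NOENV | pyev.EVFLAG_FORKCHECK | pyev.EVBACKEND_EPOLL)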
Availability: POSIX
The standard select backend. Not completely standard, as libev tries to roll its own fd_set with no limits on the number of fds, but if that fails, expect a fairly low limit on the number of fds when using this backend. It doesn’t scale too well (O(highest_fd)), but is usually the fastest backend for a low number of (low-numbered) fds.
To get good performance out of this backend you need a high amount of parallelism (most of the file descriptors should be busy). If you are writing a server, you should accept() in a loop to accept as many connections as possible during one iteration. You might also want to have a look at Loop.io_interval to increase the amount of readiness notifications you get per iteration.
This backend maps EV_READ to the readfds set and EV_WRITE to the writefds set.
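A sketch of the accept-in-a-loop advice (the listening socket setup is ordinary Python socket code; passing the socket itself as the fd and handing it over via data are just one way to wire it up):

    import socket
    import pyev

    def accept_cb(watcher, revents):
        # drain the accept queue: take as many connections as possible per readiness event
        while True:
            try:
                conn, addr = watcher.data.accept()
            except BlockingIOError:
                break
            conn.setblocking(False)
            # ... register an Io watcher for `conn` here ...

    server = socket.socket()
    server.bind(("", 8080))
    server.listen(128)
    server.setblocking(False)

    loop = pyev.default_loop()
    io = pyev.Io(server, pyev.EV_READ, loop, accept_cb, data=server)
    io.start()
    loop.start()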
Availability: POSIX
The poll backend. It’s more complicated than select, but handles sparse fds better and has no artificial limit on the number of fds you can use (except it will slow down considerably with a lot of inactive fds). It scales similarly to select, i.e. O(total_fds). See EVBACKEND_SELECT, above, for performance tips.
This backend maps EV_READ to POLLIN | POLLERR | POLLHUP, and EV_WRITE to POLLOUT | POLLERR | POLLHUP.
Availability: Linux
Use the linux-specific epoll interface. For few fds, this backend is a little bit slower than poll and select, but it scales phenomenally better. While poll and select usually scale like O(total_fds) where total_fds is the total number of fds (or the highest fd), epoll scales either O(1) or O(active_fds).
While stopping, setting and starting an I/O watcher in the same iteration will result in some caching, there is still a system call per such incident, so it’s best to avoid that. Also, dup()’ed file descriptors might not work very well if you register events for both file descriptors.
Best performance from this backend is achieved by not unregistering all watchers for a file descriptor until it has been closed, if possible, i.e. keep at least one watcher active per fd at all times. Stopping and starting a watcher (without re-setting it) also usually doesn’t cause extra overhead. A fork can both result in spurious notifications as well as in libev having to destroy and recreate the epoll object, which can take considerable time and thus should be avoided. All this means that, in practice, select can be as fast or faster than epoll for maybe up to a hundred file descriptors, depending on the usage.
While nominally embeddable in other event loops, this feature is broken in all kernel versions tested so far.
This backend maps EV_READ and EV_WRITE the same way EVBACKEND_POLL does.
Availability: most BSD clones
Due to a number of bugs and inconsistencies between the different BSD implementations, kqueue is not “auto-detected” unless you explicitly specify it in the flags or libev was compiled on a known-to-be-good (-enough) system like NetBSD. It scales the same way the epoll backend does.
While stopping, setting and starting an I/O watcher never causes an extra system call as it does with EVBACKEND_EPOLL, it still adds up to two event changes per incident. Support for fork() is bad (but sane) and it drops fds silently in similarly hard-to-detect cases.
This backend usually performs well under most conditions.
You still can embed kqueue into a normal poll or select backend and use it only for sockets (after having made sure that sockets work with kqueue on the target platform). See Embed watchers for more info.
This backend maps EV_READ into an EVFILT_READ kevent with NOTE_EOF, and EV_WRITE into an EVFILT_WRITE kevent with NOTE_EOF.
Availability: Solaris 8
This is not implemented yet (and might never be). According to reports, /dev/poll only supports sockets and is not embeddable, which would limit the usefulness of this backend immensely.
Availability: Solaris 10
This uses the Solaris 10 event port mechanism. It’s slow, but it scales very well (O(active_fds)).
While this backend scales well, it requires one system call per active file descriptor per loop iteration. For small and medium numbers of file descriptors a “slow” EVBACKEND_SELECT or EVBACKEND_POLL backend might perform better.
On the positive side, this backend actually performed fully to specification in all tests and is fully embeddable.
This backend maps EV_READ and EV_WRITE the same way EVBACKEND_POLL does.
Try all backends (even potentially broken ones that wouldn’t be tried with EVFLAG_AUTO). Since this is a mask, you can do stuff such as:
pyev.EVBACKEND_ALL & ~pyev.EVBACKEND_KQUEUE
It is definitely not recommended to use this flag, use whatever recommended_backends() returns, or simply do not specify a backend at all.
Not a backend at all, but a mask to select all backend bits from a flags value, in case you want to mask out any backends from a flags value (e.g. when modifying the LIBEV_FLAGS environment variable).
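For example (a sketch, assuming the EVBACKEND_MASK constant this entry describes), to try every backend except kqueue, or to keep only the backend bits of a flags value taken from the environment:

    import os
    import pyev

    # try every backend except kqueue
    loop = pyev.Loop(pyev.EVBACKEND_ALL & ~pyev.EVBACKEND_KQUEUE)

    # keep only the backend bits of a flags value (e.g. from LIBEV_FLAGS)
    backend_bits = int(os.environ.get("LIBEV_FLAGS", "0")) & pyev.EVBACKEND_MASK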
If flags is omitted or specified as 0, it will keep handling events until either no event watchers are active anymore or Loop.stop() was called.
A flags value of EVRUN_NOWAIT will look for new events, will handle those events and any already outstanding ones, but will not wait and block your process in case there are no events and will return after one iteration of the loop. This is sometimes useful to poll and handle new events while doing lengthy calculations, to keep the program responsive.
A flags value of EVRUN_ONCE will look for new events (waiting if necessary) and will handle those and any already outstanding ones. It will block your process until at least one new event arrives (which could be an event internal to libev itself, so there is no guarantee that a user-registered callback will be called), and will return after one iteration of the loop. This is useful if you are waiting for some external event in conjunction with something not expressible using other libev watchers. However, a pair of Prepare/Check watchers is usually a better approach for this kind of thing.
Note
An explicit Loop.stop() is usually better than relying on all watchers being stopped when deciding if a program has finished (especially in interactive programs).
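A sketch of the polling pattern described above, interleaving event handling with a long computation (the computation itself is a stand-in):

    import pyev

    loop = pyev.default_loop()
    # ... watchers registered on `loop` here ...

    for step in range(1000):           # stand-in for a lengthy calculation
        # ... do one slice of real work here ...
        loop.start(pyev.EVRUN_NOWAIT)  # handle whatever events are ready, never block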
If how is omitted or specified as EVBREAK_ONE it will make the innermost Loop.start() call return.
A how value of EVBREAK_ALL will make all nested Loop.start() calls return.