Event Service

Introduction

The new production system Prodsys 2 includes a PanDA extension JEDI that adds to the PanDA server the capability to intelligently and dynamically break down tasks (as defined by the DEFT task definition component of Prodsys 2) based on optimal use of the processing resources currently available. JEDI can break down tasks not only at the job level but at the event level. The initial application of this capability is for event-level job splitting: jobs are defined in terms of the event ranges within files that they should process, and their dispatch and execution otherwise proceed as normal. However, this capability also presents the opportunity to manage processing and dispatch work at the event or event cluster level. Here follows a description of the event service being developed to realize this approach.

The event service leverages today's excellent networks for efficient remote data access, distributed federated data access (using xrootd), and highly scalable object store technologies for data storage that architecturally match its fine grained data flows. Together these support dynamic, flexible, distributed workflows that adapt in real time to resource availability and use remote data repositories with no data locality or pre-staging requirements. The approach enables cost-effective exploitation of opportunistic computing on heterogeneous, high-capacity and high-value platforms such as HPCs ('hole filling'), commercial clouds (dynamic market-based 'spot' pricing) and volunteer computing (no resource availability guarantees).

In the event service, PanDA/JEDI receives event requests made to the PanDA dispatcher by 'event consumers': pilots that request events rather than jobs. In response to a pilot request the dispatcher sends an event range to be processed, in the form of a range of event IDs (the IDs are the ordered positions of the events in the file) together with the GUID of the containing file, and maintains the bookkeeping of dispatched events in the JEDI event table. The events PanDA dispatches come from a task currently activated for processing, partitioned by JEDI into jobs that demarcate the event sets which, once completed, will be merged and recorded in the production system.

The event-consuming pilot and its AthenaMP payload receive the event list and GUIDs, resolve the IDs to event tokens, and establish file replica access information in the form of a POOL file catalog (PFC) by which the event data can be retrieved. There are several possible approaches to this reference resolution, discussed below. The payload event processor (AthenaMP in the initial implementation) then retrieves events by direct I/O, processes them, and writes outputs to small event cluster files, most simply (but not necessarily) mapping 1:1 to the input event clusters. A monitor process in the pilot watches the directory in which the outputs are created and, as they appear, sends them to an external aggregation site and reports completion of the event range to JEDI so that JEDI can update the bookkeeping. If PanDA fails to receive completion notification for an event range before the pilot notifies PanDA of end of job, it invalidates the consumer for that event range (so the output will be ignored if it eventually arrives) and re-allocates the range to another consumer.

When JEDI detects from its bookkeeping that a set of events corresponding to a job is completed, it triggers a PanDA job at the aggregation site which merges event cluster files into the files and datasets that are then recorded in DDM and the production system.

Motivations and benefits

Opportunistic resources

Using opportunistic resources efficiently and fully for ATLAS processing is an important means of maximizing ATLAS computing throughput within budget constraints. Examples of such resources are

  • High level trigger farm (HLT), available in steady state during LS1 and opportunistically thereafter
  • High performance computing (HPC), aka supercomputers
  • Opportunistic grid resources, e.g. non-ATLAS OSG sites
  • Cloud resources, e.g. the Amazon EC2 spot market
  • Volunteer computing, aka ATLAS@Home via BOINC

These opportunistic resources share a common characteristic -- we have to be agile in how we use them:

  • Quick start (rapid setup of software, data and workloads) when they become available
  • Quick exit when they’re about to disappear
  • Robust against their disappearing with no notice : minimal losses
  • Use them until they disappear – soak up unused cycles, filling them with fine grained workloads

Fine grained event service based processing can enable agile, efficient utilization of opportunistic resources that appear and disappear with unpredictable timing and quantity. Event consumer(s) can be injected into such a resource when it becomes available, and participate in ongoing task processing, concurrently delivering outputs to an aggregation point until the resource disappears, with negligible losses (only the most recently processed clusters) if the resource disappears too suddenly for any cleanup.

DDM, output merging simplifications

Because the event service uses remote I/O for inputs and event cluster aggregation for outputs, DDM is not involved in the production workflow. Output merging is flexible and simple: the merged output file size is tunable, and aggregation of outputs to the merge SE proceeds progressively and concurrently with processing. PanDA knows when to trigger the merge job, which can happen very shortly after the last events in the job are completed, since most of the output data has already been transferred to the aggregation point.

Efficient multi-processing

Event level processing avoids idle cores in AthenaMP processing. With file-level processing, a slow event in one worker can leave the others sitting idle once they have finished their own work. If workers are issued events to process, these idle holes can be avoided.

Minimizing losses from failures

A crash doesn't lose the whole job; only a few events are lost (the most recent event ranges), and these will be re-dispatched by JEDI to another event consumer.

Architecture

Current overall architecture

diagram

Event service support in PanDA/JEDI

Event level processing

The event service takes advantage of JEDI's (Oracle) event table for managing and bookkeeping of event level processing. It has a simple structure optimized for fast update and select. Event level bookkeeping is of course more demanding than the job level bookkeeping that PanDA has used to date. The scale of the bookkeeping is somewhat ameliorated by working with event ranges, although (as for simulation) the ranges actually used may be single events. Scalability tests have been performed which show adequate performance for simulated event server activity at a realistic scale.

JEDI generates jobs to use the event service if the task is configured accordingly in its task parameters.

A schematic view of event level processing in PanDA/JEDI is as follows:

diagram

The event table schema is as follows. See PandaJEDI for the latest information.

diagram

PanDA's interactions with the pilot in event processing mode make use of two API calls

  • getEventRanges(pandaID)
    • Returns a list of {eventRangeID, file name, GUID, startEvent, lastEvent} where the GUID/(logical) file name is the location of the event data
  • updateEventRange(eventRangeID,status)
    • Updates status of an event range to indicate completion or failure
    • Could add a new method to update a list of event ranges if the number of updateEventRange requests is problematic

The pilot's runEvent module makes use of this API after it is triggered to do event level processing by the getJob response. The flow of PanDA-pilot interactions is: getJob → (getEventRanges → updateEventRange x M) x N → updateJob.
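As a rough illustration of this flow, the following sketch shows how a pilot-like consumer might drive the event loop against these two calls. The callables passed in (request_job, get_event_ranges, process_range, update_event_range, report_job) are placeholders standing in for the real pilot/server interfaces, not the actual runEvent code.

# Minimal sketch of the pilot-side event loop described above. The callables
# passed as arguments are placeholders for the real pilot/server interfaces.
def run_event_consumer(panda_id, request_job, get_event_ranges,
                       process_range, update_event_range, report_job):
    """Drive getJob -> (getEventRanges -> updateEventRange x M) x N -> updateJob."""
    job = request_job()                          # getJob: returns an event service job
    while True:
        ranges = get_event_ranges(panda_id)      # getEventRanges(pandaID)
        if not ranges:                           # no more event ranges for this job
            break
        for r in ranges:
            # r is a dict like {eventRangeID, LFN, GUID, startEvent, lastEvent}
            ok = process_range(r)                # hand the range to AthenaMP, ship the output
            update_event_range(r["eventRangeID"],
                               "finished" if ok else "failed")   # updateEventRange(id, status)
    report_job(job)                              # updateJob: final job-level report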

Event range dispatch and update must impose minimal server load; these rates will be considerably higher than the dispatch rate for equivalent file-level processing, since each job requires many event range dispatches. (They are not vastly greater than the job update (heartbeat) rate, however.)

The initial JEDI event table and event service implementation is described in this Dec 2012 post.

All event service related functions have been implemented and installed on production PanDA/JEDI nodes, ready to be tried in the production environment:

  • Dynamic definition of event service jobs through JEDI
  • API for the pilot
  • Machinery to create reattempt or merge jobs and chain them

Event service support in the PanDA pilot

Event service processing by a pilot (the standard production pilot is used) begins with the pilot making a standard getJob request to PanDA. If the job dispatched by PanDA is an event service job, the pilot's event service processing then kicks in (the runEvent module, rather than the runJob module invoked to process conventional jobs).

In the event request/processing loop the pilot is allocated sets of events to be processed, specified as event ranges. An event range ID uniquely identifies a particular event range assignment to a particular worker, used for status reporting and bookkeeping.

The event range partitioning is such that the processing time associated with an event range is no more than about 15min, assuming the use case is one in which sudden termination of the worker is a possibility. See the discussion below on output management. Note that pilots currently communicate every 30min with the server (the heartbeat), so an event request for every ~15 minutes of processing is not a dramatically different server communication rate from the present one. The event request serves as the heartbeat.
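As a concrete, purely illustrative example of the target granularity (the per-event time below is an assumed example value, and the actual partitioning is done server-side by JEDI):

# Illustrative estimate of events per range for a ~15 minute target.
TARGET_SECONDS = 15 * 60        # desired processing time per event range
seconds_per_event = 300         # assumed per-event simulation time, example value only

events_per_range = max(1, TARGET_SECONDS // seconds_per_event)
print(events_per_range)         # 3 events per range for this assumed event time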

The pilot determines the physical file replica(s) to be used as the source of the input event data, de-referencing the GUID provided by PanDA with the event range information via the usual pilot procedures for querying DDM. Each time a new GUID is encountered it builds a POOL file catalog (PFC), with a unique name, for the use of the payload job.
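For illustration, a minimal single-file PFC written by such a step could look like the sketch below; the GUID, replica path and file names are invented placeholders, and the catalog actually written by the pilot may carry additional entries.

# Sketch: write a minimal single-file POOL file catalog (PFC) for the payload.
# GUID, PFN and LFN values below are invented placeholders.
PFC_TEMPLATE = """<?xml version="1.0" ?>
<POOLFILECATALOG>
  <File ID="{guid}">
    <physical>
      <pfn filetype="ROOT_All" name="{pfn}"/>
    </physical>
    <logical>
      <lfn name="{lfn}"/>
    </logical>
  </File>
</POOLFILECATALOG>
"""

def write_pfc(path, guid, pfn, lfn):
    with open(path, "w") as f:
        f.write(PFC_TEMPLATE.format(guid=guid, pfn=pfn, lfn=lfn))

write_pfc("PoolFileCatalog_ES_0001.xml",
          "01234567-89AB-CDEF-0123-456789ABCDEF",
          "root://somesite.example//atlas/EVNT.01234567._000001.pool.root.1",
          "EVNT.01234567._000001.pool.root.1")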

The pilot triggers the launch and configuration of AthenaMP (or another payload); the configuration of the payload is driven by the transformation and the job specification.

In the near term AthenaMP has to read an initial event at the configuration stage. In particular AtlasG4_tf/AtlasG4_trf won't start without this.

The workflow in the pilot and in its interactions with the AthenaMP payload, prior to and within the event loop, is diagrammed below.

runEvent workflow in the pilot

diagram

runEvent main event loop workflow

diagram

Event service support in AthenaMP and event I/O

Just as JEDI is an enabler for the event service scheme on the grid side, AthenaMP and associated I/O developments and optimization are enablers on the core software side, providing support for

  • Queuing and streaming events to be consumed by workers
  • I/O optimization making event reading over WAN practical
  • Asynchronous pre-fetch to remove network latency from processing work-flow

The event service is predicated on workable event streaming: effective WAN data access is essential, preferably buffered by asynchronous caching. Adequate performance for remote WAN reading of the data is critical. If performance is insufficient the scheme will be limited to working with site-local data, unless an effective pre-staging scheme (such as TTreeCache asynchronous pre-fetch) is in place (which would be desirable anyway).

The payload must initialize with access to an input file (to read a single event). The input file can be remote, so there is no need to download it to the worker node. The pilot makes a PFC available prior to payload initialization to make this possible.

The yampl message passing library (developed at LBNL) is used as the basis for the communication between pilot and AthenaMP by which the event processing workflow is coordinated. Yampl has been made available on WNs in the ATLAS software stack for this purpose.

As indicated in the workflow schematic below, the Token Extractor currently uses TAG files for token retrieval. However we are working with the EventIndex (EI) team to have the event service make use of the EI for token retrieval. With the EI, tokens can be retrieved by simply making a query to the EI web service, passing it a GUID, and receiving back an ordered list of event tokens for all the events in the file.
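Purely as an illustration of that intended usage (the endpoint URL and the response format below are hypothetical placeholders, not the actual EventIndex interface), the retrieval could look like:

# Illustrative sketch only: GUID -> ordered token list from an EventIndex-like
# web service. The URL and the JSON response format are hypothetical.
import json
import urllib.request

EI_URL = "https://eventindex.example.cern.ch/tokens"   # hypothetical endpoint

def tokens_for_guid(guid):
    with urllib.request.urlopen("%s?guid=%s" % (EI_URL, guid)) as resp:
        # assume the service returns a JSON list of event tokens ordered by
        # positional event number within the file
        return json.loads(resp.read().decode())

# tokens = tokens_for_guid("01234567-89AB-CDEF-0123-456789ABCDEF")
# tokens[startEvent - 1:lastEvent] would then cover a dispatched event range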

Event input data reads can be either against local files or against remote files via direct WAN data reads. In the remote case, xrootd/FAX will typically be used. The WAN case is the most interesting one because it eliminates the need for data locality. The event service is well suited to WAN data access because of the potential for asynchronously decoupling data reads from processing, avoiding I/O waits on the CPU for WAN latencies.

Also, the event service presents the possibility of moving only exactly the data needed over the wire, using bandwidth very efficiently. This could be achieved by having an actual event server in the system -- a web service that marshals event data needed by an event consumer client and sends it over the wire. An early prototype of such a service has been made.

AthenaMP payload workflow schematic

diagram

AthenaMP configuration for the Event Service

Positional event numbers

Starting with the tags AthenaMP-01-04-00 and AthenaMPTools-00-04-00, which first appeared in the ATLAS production release 20.3.3.2, all configuration parameters required for running AthenaMP in Event Service mode are included in the following job options script:

AthenaMP/AthenaMP_EventService.py

This script can be passed to the job transform using the --preInclude command-line option, for example:

Sim_tf.py ... '--preInclude' 'EVNTtoHITS:AthenaMP/AthenaMP_EventService.py' ...

The Event Service mode of AthenaMP activated this way uses the "positional event numbers" mechanism by default (see the next section for how to switch on the "token extractor" mechanism). In order to pass additional configuration parameters to AthenaMP one can use the --preExec command-line option of the transform, as follows:

Sim_tf.py ...  '--preInclude' 'EVNTtoHITS:AthenaMP/AthenaMP_EventService.py' '--preExec' 'EVNTtoHITS:from AthenaMP.AthenaMPFlags import jobproperties as jps;jps.AthenaMPFlags.EventRangeChannel=\'THE_UNIQUE_NAME\'' ...

The example above uses the EventRangeChannel property for setting the name of the Yampl channel, which is used by Pilot and AthenaMP for communication.

Token Extractor

For activating the "token extractor" mechanism in ATLAS release 20.3.3.2 and newer, one can do the following:

Sim_tf.py ...  '--preInclude' 'EVNTtoHITS:AthenaMP/AthenaMP_EventService.py' '--preExec' 'EVNTtoHITS:from AthenaMP.AthenaMPFlags import jobproperties as jps;jps.AthenaMPFlags.UseTokenExtractor=True' ...

In older software releases the "token extractor" was the only mechanism supported by the Event Service and it was activated by the following (rather verbose) --preExec:

Sim_tf.py ... '--preExec' 'from AthenaMP.AthenaMPFlags import jobproperties as jps;jps.AthenaMPFlags.Strategy="TokenScatterer";from AthenaCommon.AppMgr import ServiceMgr as svcMgr;from AthenaServices.AthenaServicesConf import OutputStreamSequencerSvc;outputStreamSequencerSvc = OutputStreamSequencerSvc();outputStreamSequencerSvc.SequenceIncidentName = "NextEventRange";outputStreamSequencerSvc.IgnoreInputFileBoundary = True;svcMgr += outputStreamSequencerSvc' ...

Output management, job completion, retry

Each AthenaMP worker process writes a distinct output file (an 'output server' streaming outputs from all workers to one file seems a long way off). At a certain point the current output needs to be finalized and a new one started. In the initial implementation, directed at simulation, output files contain only a single event. This is consistent with keeping the fill time for an output file short to minimize losses when an opportunistic worker goes away. The single-event approach also simplifies metadata handling. For later implementations we will need some signal or protocol to drive when an output is closed.

A pilot monitoring thread, asynchronous to the event processing, monitors the output directory and spots new output files produced by the workers. It handles an output file once its modification time is old enough to be sure it is no longer being written, copying it to a (typically remote) aggregation point, for which an object store is used. An object store is well suited because small files (objects) are handled well, and the transfer mechanisms (e.g. http, xrootd) are lightweight and fast.
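A minimal sketch of such a monitoring loop is shown below; the quiet-time threshold, the output file suffix and the two callables standing for the object store copy and the event range report are assumptions, not the actual runEvent implementation.

# Sketch of the output-monitoring loop: pick up worker output files once their
# modification time is old enough, ship them to the aggregation point, and
# report the event range. Threshold, suffix and the two callables are assumed.
import os
import time

QUIET_SECONDS = 60   # assume a file untouched for this long is no longer being written

def monitor_outputs(output_dir, copy_to_objectstore, report_event_range):
    handled = set()
    while True:
        for name in os.listdir(output_dir):
            path = os.path.join(output_dir, name)
            if path in handled or not name.endswith(".pool.root"):
                continue
            if time.time() - os.path.getmtime(path) < QUIET_SECONDS:
                continue                      # possibly still being written
            copy_to_objectstore(path)         # e.g. xrdcp or http PUT to the object store
            report_event_range(path)          # updateEventRange for the completed range
            handled.add(path)
        time.sleep(30)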

Object store access is of the form s3://atlas-objectstore/eventservice_buckets/event_cluster_files. The object store access path is a site parameter defined in AGIS and attached to a PanDA queue. In the future it will also be possible to specify it as a task parameter; it is sent to the pilot as a job parameter (stored in prodDBlockToken) with the format objectstore^protocol^hostname^base_path, e.g. objectstore^root^atlas-objectstore.cern.ch:12345^/atlas/xyz.

In the future, the object store path is constructed from information expressed in the file.prodDBlockToken field in the PanDA file table: file.prodDBlockToken=objectstore:protocol:hostname:bucket_name.
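As an illustration of the '^'-separated job parameter form above, a destination URL for one output file could be assembled as follows (the helper name and output file name are illustrative, not the actual pilot code):

# Sketch: decompose a prodDBlockToken of the form
#   objectstore^protocol^hostname^base_path
# (the job-parameter format above) into a destination URL for one output file.
def objectstore_url(prod_dblock_token, output_name):
    kind, protocol, host, base_path = prod_dblock_token.split("^")
    assert kind == "objectstore"
    return "%s://%s%s/%s" % (protocol, host, base_path, output_name)

print(objectstore_url("objectstore^root^atlas-objectstore.cern.ch:12345^/atlas/xyz",
                      "HITS.01234567._000001.event_000042.pool.root"))
# -> root://atlas-objectstore.cern.ch:12345/atlas/xyz/HITS.01234567._000001.event_000042.pool.root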

Upon successful output transfer, the output monitor notifies PanDA (by sending an updateEventRange message to the PanDA server) of the completion of the event(s) in the file, and PanDA records this in its bookkeeping.

Stored output files, and their reporting to PanDA, are at the event range level (ranges as initially allocated by PanDA). These ranges are the bookkeeping units of PanDA. Therefore the event range assignments from PanDA should be at a granularity suitable for the outputs (the target granularity is ~15min of processing time). If assigned event ranges were at a coarser granularity than the outputs, a pre-merge step would have to take place in the pilot, adding complexity and, more importantly, defeating the purpose of the 15min output granularity, because multiple outputs would have to be accumulated on the worker node and would all be lost if the worker were terminated.

If PanDA fails to receive completion notification for an event range before job end, or if an event range fails, it invalidates the consumer for the processing of that range (so the output will be ignored if it eventually arrives) and re-allocates the range to another consumer with an incremented attempt number. A failed event range can be retried within a single job at most N times, N=3 by default (configurable); each attempt has a unique attempt number and eventRangeID. If there are unprocessed or failed event ranges when the pilot sends the final heartbeat, or if the old job fails with a lost heartbeat, a new job is generated to retry them, again with unique eventRangeIDs; the number of such reattempts is also configurable, with a default of 3.

PanDA is informed by the pilot via updateJob when the job is completed, i.e. when all allocated event ranges have been processed as far as the pilot is concerned. At that point, if there are unprocessed or failed ranges, PanDA will keep the job active until all ranges are successfully processed. When PanDA ends the job, it triggers a PanDA job at the aggregation site which merges the event cluster files into the files and datasets that are then recorded in DDM and the production system. This merge procedure is the same as is performed routinely in production. The specification of a merge job contains

Following the merge, the object store files can be deleted. (In the long run, should object stores and the event service be successful, we may want to consider keeping them and doing away with the merge; downstream processing would use the fine grained object store files as input.)

S3 access authentication

There are two scenarios for S3 access.

In the first case, credentials (key-pairs) for S3 access are stored as plain files on the PanDA servers. The file names are defined in AGIS, such as RAL_ObjectStoreKey.pub and RAL_ObjectStoreKey. The pilot is authorized with GSI, and can download the keys for each object store if it runs with a special DN that has 'k' in the gridpref column of the atlas_pandameta.users table. The pilot then uploads and downloads files to/from S3 using the keys.

In the second case, PandaProxy stores the keys in memory (redis) and the pilot itself never touches them. A secret token is given to the pilot for each job, and interactions between the pilot and PandaProxy are authenticated with this token (in addition to GSI if SSL is still used). The pilot asks PandaProxy, presenting the token, to generate pre-signed S3 URLs; the lifetime of a pre-signed URL is 30min. The pilot uploads/downloads files to/from those URLs without needing the keys.
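To illustrate the pre-signed URL idea (this is not the PandaProxy implementation; the endpoint, bucket and object key below are placeholders), a service holding the credentials can mint a time-limited URL with boto3, and the client can then upload with nothing but that URL:

# Illustration of the pre-signed URL mechanism with boto3; endpoint, bucket and
# object key are placeholders. Only the proxy side ever sees the S3 keys.
import boto3
import requests

s3 = boto3.client("s3",
                  endpoint_url="https://atlas-objectstore.example",  # placeholder
                  aws_access_key_id="ACCESS_KEY",
                  aws_secret_access_key="SECRET_KEY")

# Proxy side: generate a URL valid for 30 minutes for one specific object
url = s3.generate_presigned_url("put_object",
                                Params={"Bucket": "eventservice_buckets",
                                        "Key": "event_cluster_files/output.pool.root"},
                                ExpiresIn=1800)

# Pilot side: upload using only the URL, without ever holding the keys
with open("output.pool.root", "rb") as f:
    requests.put(url, data=f)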

Accounting CPU time and wall time

PanDA will keep running totals of the time consumed in the job DB. This is handled by an addition to an existing update statement, so no new latency is introduced. The pilot will report the time consumed for each event range via a new argument on updateEventRange. This is a per-worker number and so should come from AthenaMP; Vakho will think about how to provide it to the pilot. For the wall time, the event service job runs in a slot with a given core count allocation, so the wall time consumed is the amount of time the job holds the slot (the pilot lifetime) times coreCount. A coreCount column will be added to the job table so we are not reliant on extracting it from jobMetrics.
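Concretely, the wall time charged to an event service job would be computed along these lines (example numbers only):

# Rough wall-time accounting for an event service job: the slot is charged for
# as long as the pilot holds it, scaled by the allocated core count.
pilot_lifetime_seconds = 6 * 3600   # example: the pilot held the slot for 6 hours
core_count = 8                      # coreCount of the slot

wall_time_consumed = pilot_lifetime_seconds * core_count
print(wall_time_consumed)           # 172800 core-seconds, i.e. 48 core-hours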

Error handling

The Event Service is a complex distributed system, and different kinds of errors can occur in its various sub-components at various stages of operation. In this section we try to identify the error conditions and to specify how the Event Service is supposed to react to them.

Event processing errors

When AthenaMP detects an error at the event processing stage, it reports it to the pilot by sending a special error message. These messages contain an error identifier, a range ID (if appropriate) and a diagnostic message. It is then up to the pilot to decide how to react to the error. There are two types of event processing errors in the Event Service: range specific and fatal. For range specific errors the pilot simply marks the event range as failed and informs the PanDA/JEDI server. Fatal errors mean that AthenaMP cannot continue, so it either stops by itself or has to be stopped by the pilot. A minimal sketch of how the pilot might parse such messages is given after the tables below.

  • Range specific errors
| Error | Acronym | Description | Example |
| Range processing error | ERR_ATHENAMP_PROCESS | During processing of some range the AthenaMP worker either segfaulted or exited with an error | ERR_ATHENAMP_PROCESS RangeID: Failed to process event range |
| Range parsing error | ERR_ATHENAMP_PARSE | Some fields are missing in the range | ERR_ATHENAMP_PARSE "EventRange": Wrong format |
| Range parsing error | ERR_ATHENAMP_PARSE | Wrong values of the range fields (e.g. startEvent>lastEvent) | ERR_ATHENAMP_PARSE "EventRange": Wrong values of range fields |
| Range value error | ERR_TE_RANGE | Detected by the Token Extractor: a positional number in the given event range is wrong (e.g. larger than the number of events in the input file) | ERR_TE_RANGE RangeID: Range contains wrong positional number 5001 |

  • Fatal errors
| Error | Acronym | Description | Example |
| Token Extractor config error | ERR_TE_FATAL | Wrong hostname in the Event Index URL | ERR_TE_FATAL RangeID: CURL curl_easy_perform() failed! Couldn't resolve host name |
| Token Extractor config error | ERR_TE_FATAL | Bad URL to the Event Index | ERR_TE_FATAL RangeID: URL Error 404 ... |
| Bad GUID | ERR_TE_FATAL | Bad GUID format in event ranges (detected by the Token Extractor) | ERR_TE_FATAL RangeID: URL Error 500 java.lang.NumberFormatException: Invalid GUID length |
| Bad GUID | ERR_TE_FATAL | Wrong GUID in event ranges (detected by the Token Extractor) | ERR_TE_FATAL RangeID: No tokens for GUID XXX-XXX-XXX |
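The sketch below, assuming the "ACRONYM identifier: diagnostic" layout shown in the example columns above, indicates how the pilot might split and classify such a message; it is an illustration, not the actual runEvent code.

# Sketch of pilot-side classification of AthenaMP/Token Extractor error
# messages, assuming the "ACRONYM identifier: diagnostic" layout shown above.
RANGE_ERRORS = {"ERR_ATHENAMP_PROCESS", "ERR_ATHENAMP_PARSE", "ERR_TE_RANGE"}
FATAL_ERRORS = {"ERR_TE_FATAL"}

def classify(message):
    acronym, _, rest = message.partition(" ")
    ident, _, diagnostic = rest.partition(":")
    if acronym in FATAL_ERRORS:
        kind = "fatal"            # stop (or let AthenaMP stop) the whole payload
    elif acronym in RANGE_ERRORS:
        kind = "range"            # mark only this event range as failed
    else:
        kind = "unknown"
    return kind, ident.strip(), diagnostic.strip()

print(classify("ERR_ATHENAMP_PROCESS RangeID: Failed to process event range"))
# -> ('range', 'RangeID', 'Failed to process event range')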

Yoda: MPI-based Event Service implementation for HPCs

HPC machines are one of the most important deployment targets for Event Service applications. However, specifics of most HPC architectures (for example, the absence of outbound internet connectivity from HPC compute nodes) prevent us from running the conventional Event Service applications on such machines. Thus, specifically for running on HPCs, we propose a special implementation of the Event Service, Yoda, in which the entire Event Service application is represented as a single MPI job that can be submitted to the HPC batch system in one go, and in which the JEDI and pilot components communicate with each other using MPI point-to-point communication (as opposed to the HTTP-based communication of the conventional Event Service). The proposed architecture and implementation details of Yoda can be found in this first draft document, or see HpcYoda.
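As a minimal sketch of that communication pattern only (using mpi4py; this is not Yoda's actual code), rank 0 can play the JEDI-like master handing out event ranges while the other ranks play the consumers:

# Illustration of the master/worker pattern with MPI point-to-point messages
# (mpi4py). Rank 0 stands in for the JEDI-like master, other ranks for pilots.
# This is a sketch of the communication style only, not Yoda itself.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # master: hand each worker one dummy event range, then a stop signal (None)
    for dest in range(1, size):
        comm.send({"eventRangeID": dest, "startEvent": dest, "lastEvent": dest},
                  dest=dest, tag=1)
    for dest in range(1, size):
        comm.send(None, dest=dest, tag=1)
else:
    while True:
        r = comm.recv(source=0, tag=1)   # receive work from the master over MPI
        if r is None:
            break
        # ... process the event range with AthenaMP, stage the output, report ...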

Event Service Operation

Data prefetching on the worker nodes

This page describes the mechanism of asynchronous prefetching of input data on the Event Service worker nodes.

Questions and issues

  • How to handle/upload job metadata (payload + output)
  • Monitoring
  • How to handle multi-step transformations. Does the same payload workflow still work, depositing end product outputs for handling by the output handler?
  • A token issue from Peter: The problem with output file merging/collection tends to be the fact that our current navigational infrastructure uses Tokens that contain the ROOT TTree entry number as row index. That of course is mutable when files are merged and would need to be updated. POOL/APR has a redirectional layer to do that, but it's not very beautiful and requires non-ROOT code. The event store people are working on using immutable event content data for indexing (e.g. run number, event number) instead, but that will require substantial developments to Athena I/O and some changes to ROOT.
  • Using run number/event number for indexing will present its own issues as well. It becomes less clear what a well-defined event ordering should be, and this ordering is vital if the event IDs and event ranges used by the event service (which are positional IDs in the file) are to map unambiguously to specific events. At present the OID in the file is the positional index, but this will no longer be the case when run/event number is used. The event I/O experts take it as a requirement that a well-defined, efficient ordering be supported.
  • The event service depends on efficient WAN direct reads of event data if it is to be used in a distributed way. Event store optimization work in recent years has made this possible. It is important that the performance be checked and demonstrated, and also that the efficiency of WAN data access be preserved in the future as the event data model and its storage representation evolve.
  • PanDA queues -- event service uses standard PanDA queues (preferred) or its own? Some schedconfig parameters can be different for event processing, e.g. copyprefixin.
  • How will AthenaMP pass the CPU usage of its workers to the pilot (for accounting)?

People involved

The event service is being developed as a joint project between ATLAS Software and ATLAS Distributed Computing.

  • AthenaMP, payload architecture, event I/O and event access: Vakho Tsulaia, Peter Van Gemmeren
  • PanDA server, JEDI, pilot: Tadashi Maeno, Paul Nilsson
  • Architecture, planning: Paolo Calafiura, Simone Campana, Kaushik De, Vakho Tsulaia (Coordinator - Software), Torre Wenaus (Coordinator - ADC)
  • Testing, integration, commissioning, platform porting: above and Wen Guan
  • Event Index liaison: Dario Barberis

Workplan

The initial implementation target is Geant4 simulation: it is the prime candidate workload for the opportunistic resources that have the most to gain from the event service approach.

  • Settle the dispatcher communication protocols for specifying a job as event type and sending an event range in response to a request from runEvent (Tadashi and Paul, done)
  • Set up POOL catalog file creation for event type jobs (Paul, done)
  • Implement token extractor and translation of event list received from pilot into token list delivered to Scatterer. (Vakho and Peter, preliminary implementation done)
  • Set up the invocation and execution of the athenaMP payload initialization step (Paul and Vakho, done)
  • Set up the pilot + athenaMP workflow in the event loop, from initialization to event loop processing to output file generation and management (Vakho and Paul, first version done)
  • Implement the communication from pilot to PanDA/JEDI to inform PanDA of completed events: status of completion and transmission to aggregation point, retry number (Tadashi and Paul, done)
  • Implement the output monitor to detect completed output files, send them to aggregation point, inform PanDA/JEDI, and clean them up. (Paul, done)
  • Implement prototype object store based output aggregation point (Paul, done, uses CERN EOS object store)
  • Implement PanDA job that can build cluster files from aggregation area into full output file for registration with DDM. (Tadashi, done)
  • Implement PanDA/JEDI service that detects event job completion (from accumulation of event completion metadata) and triggers PanDA merge job, and times out/reassigns events held by non-responding consumers. (Tadashi, done)
  • Check WAN performance of POOL event reading. Try to use TTreeCache asynchronous pre-fetch (reported by Ilija to be working now) (Wen, ...)
  • Look into how to handle IOV metadata. (Peter)
  • Establish testing setup. (Paul, Wen)
  • Establish Hammercloud based testing setup for full infrastructure. (Paul, Wen)

At the time of writing (April 2014), development is ready for an end-to-end test; this is being assembled and is 'some small number of weeks away'.

Mailing list

To follow event service developments join the e-group.

Meetings and presentations

Event service meeting agendas are found here. Presentations given in other meetings are linked as attachments below.

  • Event service session, ATLAS S&C Week Feb 2014
  • Event service talk, T. Wenaus, ATLAS S&C Week Oct 2013
  • Event service talk, T. Wenaus, Sep 3 2013

Related links


Major updates:

-- TorreWenaus - April 2014
-- TorreWenaus - September 2013
-- TorreWenaus - June 2013


Topic attachments
| Attachment | Size | Date | Who |
| EventServerSchematicV2.png | 683.3 K | 2013-06-12 | TorreWenaus |
| EventService-UTAPanDA-20130903.pptx | 6405.1 K | 2014-01-16 | TorreWenaus |
| EventServiceSchematic201402.png | 430.0 K | 2014-04-17 | TorreWenaus |
| EventServiceSchematicV3-201309.png | 769.8 K | 2014-01-16 | TorreWenaus |
| EventTableSchema.png | 72.6 K | 2014-04-17 | TorreWenaus |
| PilotRunEventWorkflow.png | 187.7 K | 2014-04-17 | TorreWenaus |
| Yoda.pdf | 189.6 K | 2014-08-06 | VakhoTsulaia |
| athenaMP_payload_workflow_schematic.png | 103.0 K | 2014-04-17 | TorreWenaus |
| eventservice-diagram-20140625.png | 763.3 K | 2014-06-25 | TorreWenaus |
| eventservice-wenaus-scweek-201310.pptx | 3563.7 K | 2014-01-16 | TorreWenaus |
| jedi-event-server.png | 175.7 K | 2013-06-12 | TorreWenaus |
| runEventLoopWorkflow.png | 387.0 K | 2014-04-17 | TorreWenaus |