Software design and architecture (Lecture notes) – Chapter 10

Lecture 10 – Data Flow Software Architecture

An architecture style (also known as an "architecture pattern") abstracts the common properties of a family of similar designs. It defines a family of systems in terms of a pattern of structural organization.

Components Of A Style
The key components of an architecture style are:
• Elements/components that perform the functions required by the system
• Connectors that enable communication, coordination, and cooperation among elements
• Constraints that define how elements can be integrated to form the system
• Attributes that describe the advantages and disadvantages of the chosen structure

Categories Of Architectural Styles
• Hierarchical Software Architecture
  – Layered
• Distributed Software Architecture
  – Client-Server
  – SOA
• Data Flow Software Architecture
  – Pipe and Filter
  – Batch Sequential
• Event-Based Software Architecture
• Data-Centered Software Architecture
  – Blackboard
  – Shared Repository
• Interaction-Oriented Software Architecture
  – Model-View-Controller
• Component-Based Software Architecture

DATA FLOW SOFTWARE ARCHITECTURE
The data flow software architecture style views the entire software system as a series of transformations on successive sets of data, where the data and the operations on it are independent of each other. The system is decomposed into data processing elements, and the data directs and controls the order of computation. Each component in this architecture transforms its input data into corresponding output data. The connections between components may be implemented as I/O streams, I/O files, buffers, piped streams, or other kinds of connections. Data can flow in a graph topology with cycles, in a linear structure without cycles, or even in a tree structure. There are many different ways to connect the output of one module to the inputs of other modules, resulting in a range of data flow patterns.
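As a minimal sketch of this idea (all function names here are hypothetical, chosen for illustration): a chain of independent transformations, each knowing nothing about the others, with the data itself flowing from one to the next.

```python
def to_upper(text):
    """Transformation 1: normalize case."""
    return text.upper()

def tokenize(text):
    """Transformation 2: split the text into words."""
    return text.split()

def count_words(words):
    """Transformation 3: count word occurrences."""
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

def run_pipeline(data, stages):
    """The data, not the components, drives the computation:
    each stage's output becomes the next stage's input."""
    for stage in stages:
        data = stage(data)
    return data

result = run_pipeline("to be or not to be", [to_upper, tokenize, count_words])
print(result)  # {'TO': 2, 'BE': 2, 'OR': 1, 'NOT': 1}
```

Note that each transformation is independently testable and replaceable; only the driver knows the order of composition.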
There are two categories of execution sequences between modules: batch sequential, and pipe and filter (non-sequential pipeline mode).

BATCH SEQUENTIAL
The batch sequential architecture style represents a traditional data processing model that was widely used from the 1950s to the 1970s. RPG and COBOL are two typical programming languages built on this model. In batch sequential architecture, each data transformation subsystem (module) cannot start processing until the previous subsystem has completed its computation. The data flow carries a batch of data as a whole from one subsystem to the next.

Applicable domains of batch sequential architecture:
• The data are batched.
• The intermediate files are sequential-access files.
• Each subsystem reads its related input files and writes output files.

Benefits:
• Simple division into subsystems.
• Each subsystem can be a stand-alone program that works on input data and produces output data.

Limitations:
• Implementation requires external control.
• It does not provide an interactive interface.
• Concurrency is not supported, so throughput remains low.
• High latency.

PIPE AND FILTER ARCHITECTURE
Pipe and filter is another type of data flow architecture where the flow is driven by data. This architecture decomposes the whole system into data sources, filters, pipes, and data sinks, with data streams as the connections between components. The distinguishing property of the pipe and filter style is its concurrent and incremental execution. A data stream is a first-in/first-out buffer, which can be a stream of bytes, characters, or even records of XML or another type. Most operating systems and programming languages provide a data stream mechanism, which also makes streams an important tool for marshaling and unmarshaling in any distributed system.
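For contrast, the batch sequential style described above can be sketched as two stand-alone steps connected only by intermediate files, under external control (a hypothetical example; the file names and step functions are invented for illustration):

```python
import os
import tempfile

def step1_sort(infile, outfile):
    """Subsystem 1: read the whole input batch, sort it, write it out."""
    with open(infile) as f:
        records = f.read().splitlines()
    with open(outfile, "w") as f:
        f.write("\n".join(sorted(records)))

def step2_dedupe(infile, outfile):
    """Subsystem 2: starts only after subsystem 1 has written its file."""
    with open(infile) as f:
        records = f.read().splitlines()
    unique = []
    for r in records:
        if not unique or unique[-1] != r:  # input is already sorted
            unique.append(r)
    with open(outfile, "w") as f:
        f.write("\n".join(unique))

# External control: a driver runs the steps strictly one after another.
workdir = tempfile.mkdtemp()
raw, mid, out = (os.path.join(workdir, n) for n in ("raw.txt", "mid.txt", "out.txt"))
with open(raw, "w") as f:
    f.write("banana\napple\nbanana\ncherry")

step1_sort(raw, mid)    # the whole batch must be processed...
step2_dedupe(mid, out)  # ...before the next subsystem can begin

with open(out) as f:
    print(f.read().splitlines())  # ['apple', 'banana', 'cherry']
```

Each step could equally be a separate program invoked by a shell script or job scheduler; the sequential intermediate file is the only connection between them, which is exactly why throughput is low and latency high.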
Each filter is an independent data stream transformer: it reads data from its input stream, transforms and processes it, and then writes the transformed data over a pipe for the next filter to process. A filter does not need to wait for the batched data as a whole; as soon as data arrives through the connected pipe, the filter can start working right away. A filter does not even know the identity of the data upstream or downstream of it; it simply works in a local, incremental mode.

A pipe moves a data stream from one filter to another. A pipe can carry binary or character streams; object-type data must be serialized to go over a stream. A pipe is placed between two filters, which can run in separate threads of the same process, as with Java I/O streams.

There are three ways to make the data flow:
• Push only (write only): a data source or a filter pushes data downstream.
• Pull only (read only): a data sink or a filter pulls data from upstream.
• Pull/push (read/write): a filter pulls data from upstream and pushes transformed data downstream.

There are two types of filters, active and passive:
• An active filter pulls in data and pushes out the transformed data (pull/push); it works with passive pipes that provide the read/write mechanisms for pulling and pushing. The pipe and filter mechanism in Unix adopts this mode.
• A passive filter lets the connected pipes push data in and pull data out. It works with active pipes that pull data out of one filter and push it into the next. In this case the filter must provide the read/write mechanisms.

Applicable Domains Of Pipe And Filter Architecture:
• The system can be broken into a series of processing steps over data streams, and at each step filters consume and move data incrementally.
• The data format on the data streams is simple, stable, and adaptable if necessary.
• Significant work can be pipelined to gain increased performance.
• Producer/consumer problems are being addressed.

Benefits:
• Concurrency: it provides high overall throughput for heavy data processing.
• Reusability: the encapsulation of filters makes them easy to plug in, substitute, and reuse.
• Modifiability: low coupling between filters; there is little impact from adding new filters or modifying the implementation of existing ones, as long as the I/O interfaces are unchanged.
• Simplicity: a clear division between any two filters connected by a pipe.
• Flexibility: it supports both sequential and parallel execution.

Limitations:
• It is not suitable for dynamic interactions.
• A lowest common denominator (such as ASCII) is often required for data transmission, since filters may need to handle data streams in different formats, such as record or XML types rather than plain characters.
• There is data transformation overhead between filters; work such as parsing may be repeated in two consecutive filters.
• It can be difficult to configure a pipe and filter system dynamically.
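The active-filter/passive-pipe mode above can be sketched with threads and FIFO queues (a hypothetical illustration: the queues play the role of passive pipes, each thread is an active filter in pull/push mode, and a sentinel value marks end of stream):

```python
import queue
import threading

SENTINEL = None  # end-of-stream marker

def filter_stage(transform, upstream, downstream):
    """An active filter: pull a record, transform it, push it downstream.
    It starts as soon as data arrives and never waits for a whole batch."""
    while True:
        item = upstream.get()
        if item is SENTINEL:
            downstream.put(SENTINEL)  # propagate end of stream
            break
        downstream.put(transform(item))

# Passive pipes: FIFO buffers that provide the read/write mechanism.
pipe1, pipe2, pipe3 = queue.Queue(), queue.Queue(), queue.Queue()

# Two filters running concurrently in separate threads; neither knows
# the identity of the other, only its own upstream and downstream pipes.
threading.Thread(target=filter_stage, args=(str.upper, pipe1, pipe2)).start()
threading.Thread(target=filter_stage, args=(lambda s: s + "!", pipe2, pipe3)).start()

# Data source: pushes records into the first pipe incrementally.
for word in ["pipe", "and", "filter"]:
    pipe1.put(word)
pipe1.put(SENTINEL)

# Data sink: pulls results from the last pipe.
results = []
while True:
    item = pipe3.get()
    if item is SENTINEL:
        break
    results.append(item)
print(results)  # ['PIPE!', 'AND!', 'FILTER!']
```

Because the pipes are FIFO buffers, record order is preserved end to end, and the second filter can begin working on the first record while the source is still producing later ones; this is the concurrent, incremental execution that distinguishes pipe and filter from batch sequential.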