A federation is one scheme for loose coupling to help achieve a distributive, cooperative architecture.
Distributive was the adjective chosen as the D in the acronym DCF, the Distributive Computing Facility. DCF was the distributive, and distributed, architecture behind Bank of America's early-to-mid-1970s proprietary Community Office OnLine System, COOLS.[4] It used distributed General Automation minicomputers to achieve non-stop (single-component fail-soft) processing before Tandem came to the market. Distributed processing took place between two data centers, one in San Francisco and one in Los Angeles, to handle teller and administrative account inquiries from the top (Eureka) to the bottom (San Ysidro) of California, where an account based in any one area of the state could be queried from an office in the same or any other part of the state. Within data centers, processing was distributed across a series of custom-designed computer quads. The hardware ran a proprietary operating system, architected by Bob Good and George Cheng, and jointly developed by Bank of America and General Automation. Real Des Rosiers, who was later a key contributor to replacing the front-end Bunker Ramo terminal system with IBM PS/2 computers as part of COIN (Community Office Information Network) in the 1980s, was a systems programmer on DCF.

[4] I was involved as a lead architect in the specification of an online securities transaction system in 1976 [STACS], and DCF was proposed as one of about 33 "vendors" to whom we sent our RFP. DCF was in place, and proven, and fit in a tier of solutions that included networks of DEC (PDP 11/70s and 11/34s), Data General (Eclipse C/300s), Prime, and other minicomputers. IBM bid twin S/370-148s. CDC bid Cyber 172s, Honeywell bid 66/11s, and the technical team's favorite was a Univac 1100/12 MP system with UTS 400 terminals; but the system never happened. (Another story, for another time.)
There were a number of distributed minicomputer systems at the time. Distributive computing referred to how the architecture distributed the account set to be processed across each data center's module set, and how processing in each module set was distributed among the four computers making up each module. A federated, cooperating network of small systems, without a master controller, sharing workloads, was the backbone of the enterprise's retail architecture. The system was later replaced with IBM mainframe computers in order to expand its function to support the first wave of ATMs, but features of its design were carried forward into the 2000s as part of the Bank's Retail Systems Architecture (RSA).
I spend the time here to emphasize the point that there is a difference between distributed and distributive architecture. Distributed, in essence, simply means that things occur in multiple locations. Distributive relates to the rules for distribution and the manner in which distributed individual elements are joined, or treated as individuals, in a cohesive, consistent, logical manner. A fine point, but a subtly important one.
The World Wide Web, by nature, operates using distributed logic. Browsers, distributed all over the world, link to centralized servers in a star network, topologically obscured by the fog of the cloud. The processing itself is distributive when discrete, distributed components can fulfill needs independently, or can be joined in various arrangements to achieve the same consistent results. The network infrastructure itself has grown to become a distributive cloud in how Google, Amazon, Microsoft and others use farms of machines in multiple sites to share and perform tasks, using generic containers, without concern for which CPU, at what specific location or machine, performs each specific operational task. Application architecture, not so much.
Knowledge, Understanding and Awareness
Knowledge, as shown in the DIKW diagram from Wikipedia, is conceived as the bridge between information and wisdom. It is also defined as "a familiarity, awareness, or understanding of someone or something, such as facts, information, descriptions, or skills, which is acquired through experience or education by perceiving, discovering, or learning. Knowledge can refer to a theoretical or practical understanding of a subject." [Wikipedia] And understanding is described as the process that powers the climb up the pyramid. Understanding involves comprehension: information is derived from comprehending the significance of data, knowledge from comprehending information, and wisdom from comprehending knowledge. Awareness involves perception of a situation through cognizance of events and relationships. Intelligence is "the ability to acquire and apply knowledge and skills." [lexico.com]
EATS aims to be a tool for the intelligent maintenance of a knowledge base as a form of augmented intelligence for humans who choose to employ the technology.
A major focus of EATS through v4.6 was understanding. The information model of AIR is aimed at achieving and communicating understanding of, and about, the elements being described. EATS v4.7 laid out a design pattern for communicating awareness, so the understanding can be federated, actionable, and dynamic. EATSv5 is aimed at bringing that design pattern to life in a form that can be understood and used by individual people, rather than institutionalized businesses.[6]

[6] People are businesses. Businesses just aren't people, except at the low end of the contractor, gig-worker, sole proprietor, partnership, small business level. In spite of SCOTUS decisions, corporations are not people. Institutions are not people. People are mortal. They don't live for 200, 300, 400 or more years.
EATS through v4.6 operated, essentially, as a query-response system. Inquire about a subject, receive some degree of understanding. EATSv5 operates under a Publish/Subscribe model. Subscribe to a subject, be made aware of the changing understanding of, or about, the subject, dynamically, as new understandings are developed.
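The contrast between the two models can be sketched in a few lines of Python. This is purely illustrative; the `SubjectBus` class and the subject strings are hypothetical, not EATS interfaces:

```python
from collections import defaultdict

class SubjectBus:
    """Minimal publish/subscribe sketch: subscribers to a subject are
    notified each time a new understanding of that subject is published,
    rather than having to re-query for it."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, subject, callback):
        self._subscribers[subject].append(callback)

    def publish(self, subject, understanding):
        for callback in self._subscribers[subject]:
            callback(subject, understanding)

# Query-response asks once; publish/subscribe keeps the subscriber aware.
bus = SubjectBus()
updates = []
bus.subscribe("ACME Corp", lambda subj, u: updates.append(u))
bus.publish("ACME Corp", "filed for IPO")
bus.publish("ACME Corp", "IPO withdrawn")
# updates now holds both understandings, in the order they developed
```

The subscriber never polls; awareness arrives as understanding changes, which is the essential shift from v4.6 to v5.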
The technology to support this at scale is similar to the technology required to facilitate MMORPGs (massively multiplayer online role-playing games). And, in a form, EATSv5 is intended as an MMORPG. What is significantly different is one's definition of what constitutes a "game". Games are staged simulations presented as contests, with a variety of intentions that inform the rule set for the game, and different technologies employed to harness the interaction between players.
One successful technology that supports MMORPG-level gaming is the High Level Architecture (HLA). HLA is codified in the IEEE 1516™-2010 series of standards and has been adopted as a core framework for EATSv5, as specified in the EATS v4.7 design documentation.
High Level Architecture (HLA)
The High Level Architecture (HLA) is a standard for distributed simulation, used when building a simulation for a larger purpose by combining (federating) several simulations. The standard was developed in the 90’s under the leadership of the US Department of Defense and was later transitioned to become an open international IEEE standard. … Today the HLA is used in a number of domains including defense and security and civilian applications.
The purpose of HLA is to enable interoperability and reuse. Key properties of HLA are:
- The ability to connect simulations running on different computers, locally or widely distributed, independent of their operating system and implementation language, into one Federation.
- Ability to specify and use information exchange data models, Federation Object Models (FOMs), for different application domains.
- Services for exchanging information using a publish-subscribe mechanism, based on the FOM, and with additional filtering options.
- Services for coordinating logical (simulation) time and time-stamped data exchange.
- Management services for inspecting and adjusting the state of a Federation.
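The publish-subscribe and filtering services can be illustrated in miniature. The sketch below is not the actual IEEE 1516.1 RTI API; it only mimics the idea that a federate declares interest in FOM object-class attributes and is delivered nothing beyond what it subscribed to (all names are hypothetical):

```python
class MiniRTI:
    """Toy stand-in for an HLA Run-Time Infrastructure: routes attribute
    updates to federates, filtered down to their declared subscriptions."""
    def __init__(self):
        self._subs = {}  # (federate, object_class) -> set of attribute names

    def subscribe_attributes(self, federate, object_class, attributes):
        self._subs[(federate, object_class)] = set(attributes)

    def update_attributes(self, object_class, values):
        """Return each federate's filtered view of the update."""
        deliveries = {}
        for (federate, cls), attrs in self._subs.items():
            if cls == object_class:
                deliveries[federate] = {k: v for k, v in values.items() if k in attrs}
        return deliveries

rti = MiniRTI()
rti.subscribe_attributes("viewerA", "Aircraft", ["position"])
rti.subscribe_attributes("viewerB", "Aircraft", ["position", "fuel"])
out = rti.update_attributes("Aircraft", {"position": (10, 20), "fuel": 0.6, "crew": 2})
# viewerA sees only position; viewerB sees position and fuel; no one sees crew
```

The filtering is what lets many loosely coupled federates share one information space without flooding each other.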
Object Model Template
The specification for the HLA includes the specification of an Object Model Template (OMT). The template includes two methods for describing objects and their interactions for purposes of shared understanding of how components relate and interact, without the necessity of knowing all details of each object involved in a distributed simulation. EATSv5 used the OMT specification, along with the other rules and specifications of the HLA architecture and its protocols, as the basis for development of a cooperative understanding of interactivity within a functional distributive, cooperative architecture.
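As a rough illustration of the template idea, and not the normative IEEE 1516.2 tables, the sketch below declares an object class (persistent attributes) and an interaction class (parameters of a one-shot event); the class, attribute, and parameter names are hypothetical:

```python
from dataclasses import dataclass

# An object class names the persistent attributes visible to the federation;
# an interaction class names the parameters carried by a one-shot event.
# Sharing these templates lets components interoperate without knowing
# each other's internal details.

@dataclass
class ObjectClass:
    name: str
    attributes: list

@dataclass
class InteractionClass:
    name: str
    parameters: list

account = ObjectClass(name="Account", attributes=["owner", "balance", "status"])
transfer = InteractionClass(name="Transfer",
                            parameters=["from_account", "to_account", "amount"])
```

Everything a peer needs to publish, subscribe, or react is carried in these declarations, not in the implementation of any object.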
Within EATSv5, keeping up with knowledge about the real world, and actions taken as a consequence, is the game; played in a federated virtual world. The game is played every day; whether participant players notify the system of their intentions, strategies and moves, or not. EATS will accept data feeds, which translate to information, without the specific involvement of any player. Time passes without our permission. There are no breaks, and there is no time out in the real world. Even with message and file based data feeds, time drives the real world; and awareness of time, and changing conditions and situations, drives the system’s understanding of current conditions and events; and what is, or is not, reasonable and probable in terms of expectations and probabilities concerning how the future can, or must, unfold. The OMT has been adopted as the standardized protocol for how technical information about that understanding is actualized and communicated as dynamic messages internal to EATSv5’s operation.
The current HLA specification is a two-level model: a single whole federated space, described as a federation of member federates, and component entities described as simulation objects. EATS expands this into a cascaded system of arbitrary depth, drawing on insights from various papers written on the subject, and using a hybrid Document Object Model as a superset OMT.
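The cascaded expansion can be sketched as a simple tree: where standard HLA stops at the federated space and its objects, nested federated spaces extend to arbitrary depth, DOM-style. The node names below are illustrative only:

```python
class Node:
    """A federated space or leaf object in a cascaded, DOM-like hierarchy."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def depth(self):
        # A leaf object is depth 1; each enclosing federated space adds a level.
        return 1 + max((c.depth() for c in self.children), default=0)

world = Node("EATS", [
    Node("Enterprise", [
        Node("Division", [Node("Account object")]),
    ]),
])
# Three nested federated levels plus the leaf object: depth 4,
# versus the fixed two levels of the standard HLA model.
```

Because every interior node is itself a federated space, the same publish-subscribe and object-model rules apply uniformly at every level of the cascade.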
- AF AIR Metamodel
- AF Concerns Model
- AF Federation Model
- AF Models