New Age Computing
Estimated reading time: 6 minutes
Compiled by Leo O'Connor from a document researched and written by James P. Smith, Ph.D.
Far from hype, self-managed or autonomic computing is attracting serious attention.
Autonomic computing (AC) is the name chosen by IBM to describe the company's new initiative aimed at making computing more reliable and problem-free. It is a response to a growing realization that the problem today with computers is not that they need more speed or have too little memory, but that they crash all too often.
PC users want to get away from the "ctrl-alt-del" solution to problems. Business users want to float among disparate databases with ease, and they want their data to be safe and eternal. Self-healing, reliable computing has always been a goal of hardware and software suppliers. Generally, technology has surpassed the needs of most computer users; their priority now is "something that works and keeps on working."
AC Initiatives in the IT Industry
Information technology (IT) researchers from leading vendors such as IBM and Microsoft Corp., and from universities such as Cornell and the University of California, Berkeley, met recently at IBM's Almaden Research Center in San Jose, Calif. Their mission: describe projects and test technologies that permit large distributed systems to configure, monitor, optimize and heal themselves with little or no intervention by human administrators. They acknowledged, however, that a new set of industry standards would have to be defined for hands-off management of large, distributed, heterogeneous systems.
The Globus Project, a consortium of researchers from universities and national laboratories building technical computing grids, has begun to define the Open Grid Services Architecture (OGSA), a collection of XML definitions and other tools that would permit systems to self-manage and interoperate over the Internet. OGSA work has yet to be applied to commercial environments.
While the ultimate goal of AC is years off, vendors have released some products that make individual computing elements more self-managing. For example, Microsoft's SQL Server database product includes an "Index Tuning Wizard and Analysis" feature that performs what-if analysis using sample queries to automate database design decisions. The company is working on improving the tuning wizard by refining exactly which statistics to collect and analyze.
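The idea behind such what-if tuning can be pictured with a small sketch: score each candidate index against a sample query workload and keep the one with the lowest estimated cost. The workload, candidate indexes and cost model below are invented for illustration and are not Microsoft's implementation.

```python
# Hypothetical sketch of "what-if" index selection: evaluate candidate
# indexes against a sample query workload and pick the cheapest option.
# The cost model and numbers are invented for illustration only.

SAMPLE_WORKLOAD = [
    {"table": "orders", "filter_column": "customer_id", "rows_scanned": 1_000_000},
    {"table": "orders", "filter_column": "order_date",  "rows_scanned": 1_000_000},
    {"table": "orders", "filter_column": "customer_id", "rows_scanned": 1_000_000},
]

CANDIDATE_INDEXES = [None, "customer_id", "order_date"]

def estimated_cost(query, index_column):
    """Crude what-if cost: an index on the filtered column turns a full
    scan into a cheap lookup; otherwise every row is scanned."""
    if index_column == query["filter_column"]:
        return 100                      # indexed lookup (arbitrary unit cost)
    return query["rows_scanned"]        # full table scan

def recommend_index(workload, candidates):
    """Return the candidate index with the lowest total workload cost."""
    totals = {
        candidate: sum(estimated_cost(q, candidate) for q in workload)
        for candidate in candidates
    }
    return min(totals, key=totals.get), totals

best, costs = recommend_index(SAMPLE_WORKLOAD, CANDIDATE_INDEXES)
print(f"Recommended index: {best}; estimated costs: {costs}")
```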
Similarly, IBM has made advances through its eLiza Project, intended in part to spread many automated-management features from its mainframes to the company's other, newer server lines. The idea is to create a high-level "nervous system" that ultimately would permit enterprises to define business policies and objectives and have entire computing environments manage themselves.
Delivering such a comprehensive level of self-management is becoming increasingly critical to IT vendors and their enterprise customers. As demand for computing grows, enterprises are being forced to build increasingly complex systems composed of many more servers, storage devices and other elements. Enterprises are even expected to borrow the concept of computing grids: networks of systems that dynamically share capacity over Internet technologies.
Management Cost Impediment
As computing environments become more complex, enterprises need more people and money to manage them. Industry standards and products to support the vision of AC seem to be years off, but many researchers are attacking the problem:
- A project at Columbia University, called Kinesthetics eXtreme, places Java-based agents, or probes, into legacy systems and ties them into a high-level set of policies and rules to allow for system self-management (a simplified sketch of this probe-and-policy pattern appears after this list).
- IBM has called for industry-wide cooperation to develop AC standards and technologies. Yet officials at Hewlett-Packard (HP) say the competitive pressure to deliver on the concept has already begun. The company has already shipped a new version of its Utility Data Center products but has delayed making the specifications publicly available. HP ultimately plans to conform to OGSA or other evolving standards, but believes it can exploit an advantage as the only enterprise IT vendor delivering on AC concepts today.
- Application server and messaging vendors are consolidating, and network vendors are looking to add more value. Looking across academia, the JCP, the W3C and Web services, certain patterns emerge: the technologies implicated in the AC initiatives, i.e., rules, intelligent networking and Web services, are all interrelated parts of middleware. Hence, while AC may be a long way off, other technologies are playing their part in furthering its aims.
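As a rough illustration of the probe-and-policy pattern mentioned above, the sketch below has lightweight probes report measurements to a rule engine that maps policy violations to corrective actions. The metric names, thresholds and actions are assumptions made for illustration, not details of the Columbia or IBM designs.

```python
# Minimal sketch of the probe/policy pattern: probes report metrics,
# a rule engine checks them against policies and triggers actions.
# Metric names, thresholds and actions are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    metric: str
    threshold: float
    action: Callable[[str, float], None]

def restart_service(metric: str, value: float) -> None:
    print(f"[action] {metric}={value:.1f} violates policy; restarting service")

def shed_load(metric: str, value: float) -> None:
    print(f"[action] {metric}={value:.1f} violates policy; shedding load")

POLICIES = [
    Rule("heap_usage_pct", 90.0, restart_service),
    Rule("avg_response_ms", 500.0, shed_load),
]

def evaluate(probe_readings: dict) -> None:
    """Compare each probe reading against the policy rules and act."""
    for rule in POLICIES:
        value = probe_readings.get(rule.metric)
        if value is not None and value > rule.threshold:
            rule.action(rule.metric, value)

# A probe embedded in a legacy system would periodically report readings:
evaluate({"heap_usage_pct": 94.2, "avg_response_ms": 120.0})
```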
Addressing Complexity
The difference between artificial intelligence (AI) and AC is partly a matter of definition: "intelligent machine" may refer to a unit that embodies human cognitive powers, and in that sense the two are not the same. However, if an intelligent machine is construed as a system that can adapt, learn and take over certain functions previously performed by humans, then AC does involve embedding that kind of intelligence in computing systems.
AI is a critical discipline that will help create AC. AI-related research, some of it involving new ways to apply control theory and control laws, can provide insight into running complex systems that acclimate to their environments. However, AC does not require the duplication of conscious human thought as an ultimate goal.
To create autonomic systems, researchers must address key challenges with varying levels of complexity. Following is a partial list:
System identity. Before a system can transact with others, it must know the extent of its own boundaries. How will our systems be designed to define and redefine themselves in dynamic environments?
Interface design. With a multitude of platforms running, system administrators face a briar patch of knobs. How will consistent interfaces and points of control be built while allowing for a heterogeneous environment?
Translating business policy into IT policy. How can human interfaces be created that remove complexity while permitting users to interact naturally with IT systems?
Systemic approach. Creating autonomic components is not enough. A further requirement is to unite a constellation of autonomic components into a federated system.
Standards. The age of proprietary solutions is over. The task now is to design and support open standards that will work.
Adaptive algorithms. New methods will be needed to equip the new systems to deal with changing environments and transactions. The goal: to create adaptive algorithms by using previous system experience to improve the rules.
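One way to picture such an adaptive rule is a toy sketch in which an alert threshold drifts according to feedback on earlier alerts. The feedback scheme below is an assumption made for illustration, not a published AC algorithm.

```python
# Toy adaptive rule: the alert threshold is nudged based on feedback
# about whether previous alerts were genuine problems or false alarms.

class AdaptiveThreshold:
    def __init__(self, threshold: float, step: float = 0.05):
        self.threshold = threshold
        self.step = step

    def should_alert(self, value: float) -> bool:
        return value > self.threshold

    def record_feedback(self, was_real_problem: bool) -> None:
        """Use past experience to refine the rule: confirmed problems make
        the rule more sensitive, false alarms make it less sensitive."""
        if was_real_problem:
            self.threshold *= (1 - self.step)
        else:
            self.threshold *= (1 + self.step)

rule = AdaptiveThreshold(threshold=0.80)
print(rule.should_alert(0.85))           # True: above the current threshold
rule.record_feedback(was_real_problem=False)
print(round(rule.threshold, 3))          # threshold drifts up after a false alarm
```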
What Lies Ahead?
The difficulty in developing and implementing AC is daunting. At the heart of the matter is the need to assemble minds from multiple technical and scientific disciplines.
Part of the challenge lies in the fact that AC has been conceived as a "holistic" approach to computing. The difficulty is not the machines themselves: year after year, scientists and engineers have exceeded goals for computer performance and speed. Rather, the problem now lies in creating the open standards and new technologies needed for systems to interact effectively on two levels: to enact predetermined business policies more effectively, and to protect and "heal" themselves with minimal dependence on traditional IT support. This broader systems view has many implications; on a conceptual level, the way computing systems are defined and designed will need to change:
1. The computing paradigm will change from one based on computational power to one driven by data. The way computing performance is measured will also change, from processor speed to the immediacy of the response.
2. Individual computers will become less important than more granular and dispersed computing attributes.
3. Computing economics will evolve to better reflect actual usage, what IBM calls e-sourcing.
4. Based on new AC parameters, individual component functionality will change and may include scalable storage and processing power to accommodate the shifting needs of individual and multiple autonomic systems.
5. Transparency in routing and formatting data to variable devices.
6. Evolving chip development to better leverage memory.
7. Improving network monitoring functions to protect security, detect potential threats and achieve a level of decision making that allows for redirection where needed.
8. Smarter microprocessors that can detect errors and anticipate failures.
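As a rough illustration of the last point, failure anticipation can be sketched as watching the trend in correctable-error counts and flagging a component before it fails outright. The window size and rule below are invented for illustration, not drawn from any specific processor design.

```python
# Toy illustration of anticipating failure from an error trend: if
# correctable-error counts keep rising, flag the component before it
# fails outright. The window and decision rule are invented.

from collections import deque

class FailurePredictor:
    def __init__(self, window: int = 5):
        self.recent_errors = deque(maxlen=window)

    def record(self, correctable_errors: int) -> bool:
        """Record this interval's error count; return True if the trend
        suggests the component should be proactively replaced."""
        self.recent_errors.append(correctable_errors)
        if len(self.recent_errors) < self.recent_errors.maxlen:
            return False
        counts = list(self.recent_errors)
        strictly_rising = all(a < b for a, b in zip(counts, counts[1:]))
        return strictly_rising and counts[-1] > 10

predictor = FailurePredictor()
for errors in [1, 2, 4, 9, 17]:          # steadily worsening error counts
    if predictor.record(errors):
        print("Anticipated failure: schedule replacement before a crash")
```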
For more information, contact Julia Rowell at (210) 247-3870; Fax: (210) 348-1003; E-mail: jrowell@frost.com.