Swimming in a humongous volume of information and data, clean or dirty, is a challenge, and that challenge is heightened as the volume continues to grow and become increasingly complex. Will we miss guiding information and useful data that we could use to our advantage? Will this overwhelming volume of data reach an unmanageable level? And how will we leverage technologies to make it manageable and useful, and to gain a competitive edge in a timely fashion?
Today, artificial intelligence (AI) and machine learning (ML) have become everyday words, yet their present reality and future potential are still evolving (Figure 1). There has been sheer excitement about emerging intellectual and dexterous capabilities that can improve our lives, businesses, and security; there has also been trepidation about unknown and unintended consequences.
Figure 1: AI will impact a multitude of fields, including education, business, and military.
An AI system is expected to ingest data through ML, analyze it, build a model, and make decisions based on that model, then modify the model as new data arrives. It should be a data-driven system that learns and reacts using a generalized learning strategy built on algorithmic models. In this way, new insights are generated without relying on rule-based computer programs.
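The contrast with rule-based programs can be sketched in a few lines of Python (a toy illustration with hypothetical sensor readings, not any specific system): a hand-written rule is fixed, while even a minimal learned model derives its decision boundary from labeled data and can be refit as new data arrives.

```python
# Illustrative sketch (hypothetical data): a rule-based program uses a fixed,
# hand-written threshold, while a minimal "learned" model derives its
# threshold from labeled examples and can be refit on new data.

def rule_based(x):
    # Hard-coded rule: the threshold never changes, whatever the data shows.
    return "anomaly" if x > 100 else "normal"

def fit_threshold(samples):
    # Learn a decision boundary as the midpoint between class means.
    normals = [x for x, label in samples if label == "normal"]
    anomalies = [x for x, label in samples if label == "anomaly"]
    return (sum(normals) / len(normals) + sum(anomalies) / len(anomalies)) / 2

# Hypothetical labeled sensor readings.
data = [(10, "normal"), (12, "normal"), (90, "anomaly"), (95, "anomaly")]
threshold = fit_threshold(data)

def learned(x, t=threshold):
    return "anomaly" if x > t else "normal"

print(rule_based(80))   # the fixed rule misses this reading: "normal"
print(learned(80))      # the fitted model flags it: "anomaly"
```

Refitting on a larger sample simply recomputes the boundary; no program logic changes, which is the essence of the data-driven approach described above.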
The ability to chew incessantly through any amount of data and an unlimited combination of variables, parsing data, capturing knowledge, and building deterministic or predictive models, lets ML surpass human capacity. Unconstrained by preset statistical assumptions, ML can also surpass human analysts by making predictions with higher accuracy.
As a result, wherever there are too many potential combinations and too much complexity, ML can be a potent tool. And an AI system, whether it emulates human performance or replaces humans in executing routine or non-routine tasks, can facilitate decision-making and process automation.
Another beauty is that machines are sleepless, working 24/7, free of time zones, and independent of geographic territories. Advances in data collection, aggregation, algorithms, and processing power have enabled AI and will continue to drive breakthroughs. A variety of applications have employed AI to different extents, ranging from financial services to business operations and military prowess.
In business, AI and ML can apply to nearly every function. They will play an impactful role in business intelligence and analytics solutions by rapidly transforming learned data into action to create competitive advantages. AI will also aid IoT data analysis in data preparation and discovery, predictive analytics, and geospatial information gathering. One example is developing management processes that build the most effective teams of judgment-focused humans and prediction-focused AI agents.
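A minimal predictive-analytics sketch, using hypothetical sales figures, shows the basic pattern behind such solutions: fit a model to historical data, then project forward. Here it is ordinary least squares in plain Python; real business-intelligence systems would use far richer models over far more features.

```python
# Predictive-analytics sketch (hypothetical numbers): fit a least-squares
# line to past monthly sales and project the next month.

def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x on a single feature.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

months = [1, 2, 3, 4, 5]           # hypothetical past months
sales = [100, 110, 125, 130, 145]  # hypothetical sales figures
a, b = fit_line(months, sales)
print(f"Projected month-6 sales: {a + b * 6:.1f}")
```

The action step the article describes, turning learned data into decisions, would then consume this projection (e.g., to set inventory or staffing levels).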
Figure 2: Military AI applications in urban environments.
In the military, emerging technologies will shape the next generation of war. For instance, through human-agent teams and advances in AI, soldiers will provide commanders with real-time information about the adversary, gathered from a variety of sources. Army robotics will give individual soldiers the capability to control swarms of robotic systems for missions that would otherwise require large numbers of troops. A single soldier could conduct reconnaissance over large areas with dozens of robotic systems, which would be especially important in conditions such as dense urban environments, where nearly everything takes substantial manpower to overcome and control. Intelligent teaming and robotic systems can deliver significant tactical advantages and integrated cross-domain capabilities in multi-domain battles (air, ground, maritime, space, cyberspace) to win a complex war (Figure 2). The concept could also be developed to enhance battlefield communications when networks are hampered by enemy activity or natural obstacles.
AI talent is key, yet it is in short supply, with demand exceeding supply. Funding has grown exponentially during the last decade, and more money will pour in from both the private and government sectors to nurture new talent and fill the AI talent gap. Meanwhile, thousands of startups in this arena are burgeoning around the globe. Reportedly, the UK has launched new university courses focused on AI and added funding for doctoral students at top universities. The UK has also set up a parliamentary select committee on AI, dedicated to considering and making recommendations on the economic, ethical, and social implications of advances in AI.
Moreover, China is now embarking on an unprecedented effort to master AI. Its government plans to invest tens of billions of dollars in AI technology in the coming years, and many Chinese companies are investing heavily in educating and developing AI talent. The Chinese government is pushing hard for the development of AI and IoT in China, as well as for commercial AI companies. If this nationwide effort succeeds, China could emerge as a leading force in AI. China's success in building supercomputers demonstrates its potential to catch up to world leaders in AI hardware.
Hardware plays a critical part in the AI era and works hand in hand with software systems. The increased workloads and nearly unlimited processing demands propelled by AI/ML will require the most advanced semiconductors, packaging approaches, and manufacturing prowess ever developed to reach the needed interconnect density.
To enable AI and its building blocks (machine learning, deep learning, neural networks, new processor and memory chips, and new architectures), system designs that deliver low power consumption, high performance, low latency, high bandwidth, and high speed will be ever in demand. Only by fulfilling these performance requirements can inference processing replace traditional program processing. Equally demanding is assessing and optimizing for different types of AI workloads, a business case that justifies building custom-designed chips (e.g., application-specific integrated circuits, or ASICs).
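One workload-specific optimization that inference-oriented chips commonly bake into silicon is reduced-precision arithmetic. The sketch below, with assumed weight values (not from any real model), simulates symmetric 8-bit quantization in software to show the idea: each 32-bit float weight shrinks to one byte, cutting memory and bandwidth roughly fourfold at a small accuracy cost.

```python
# Why inference-oriented custom chips pay off, sketched in software
# (assumed weight values): quantizing 32-bit float weights to 8-bit
# integers cuts storage and bandwidth by ~4x.

def quantize(weights, bits=8):
    # Symmetric linear quantization: map floats onto signed integers.
    qmax = 2 ** (bits - 1) - 1            # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    # Recover approximate float values from the integer codes.
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]         # hypothetical model weights
q, scale = quantize(weights)
approx = dequantize(q, scale)
print(q)       # each value now fits in 1 byte instead of 4
print([round(w, 3) for w in approx])
```

Hardware that operates on such 8-bit codes directly, rather than full-precision floats, is exactly the kind of workload-tailored design the ASIC business case rests on.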
Recently, DARPA's Microsystems Technology Office (MTO) established funding that could reach upwards of $1.5B over its lifetime. Dubbed the Electronics Resurgence Initiative (ERI), the fund will support advances in chip technology. It represents a significant increase in the hardware budget, focusing on chip design, architecture, materials, and integration, as well as leveraging ML to substantially speed up new chip design.
To face the challenges of the AI era, new semiconductor technology will call for materials innovations to develop a wide array of new processor integrated circuits (ICs) and memory chips. This is a demanding area in technology and a growing space in business.
Further, as global competition continues and new technologies become available, who will have the upper hand remains to be seen. Until now, the U.S. semiconductor industry has held a leading position in AI hardware, in an ongoing global competition among scientists, engineers, companies, and countries. There is a long way to go before AI reaches its full potential of truly mimicking human cognitive capabilities and functions (e.g., asking the right questions at the right time to solve the right problems in real time).
AI is creating a new paradigm. Ultimately, the best human-machine teaming should yield synergistic performance and capability by integrating judgment-focused humans with prediction-focused AI agents. AI should be destined to augment human cognition, capabilities, and capacities without causing ethical and social issues. That is the value and virtue of human-machine intelligent teaming!