By AI Trends Staff
Coverage of the AI World Government conference, held in Washington, DC from June 24-26, was extensive. Here is a summary of a selection of the reports:
How AI Investments Can Improve Health Care
From FederalTimes: Maintaining healthy systems — both technological and biological — requires clean, clear data. During a June 25 panel discussion at the AI World Government Conference in Washington, D.C., experts discussed how artificial intelligence and data sharing could fundamentally change how people treat and prevent disease.
“[It’s] a way of thinking about how health information flows across the country and the first step is getting it in digital format,” Paula Braun, entrepreneur-in-residence at the Centers for Disease Control and Prevention, said.
Using AI to collect and analyze mortality data, Braun was able to create coherent information that made systemwide improvements possible.
Data and AI are also being used to track the effects of policy changes in Medicare and Medicaid and how those changes affect program spending, program utilization, and the health of beneficiaries, said Alison Oelschlaeger, director and chief data officer, Centers for Medicare and Medicaid Services. Oelschlaeger also tracks how CMS shares data with researchers, providers, and patients themselves.
As certain medical services moved from hospitals to community-based care, data had to be collected and shared with public officials on where electricity would be essential to deliver medical care in the case of an emergency, said Kristen Finne, director, HHS emPOWER program and senior analyst, Office of Emergency Management & Medical Operations, ASPR.
“We realized how we can actually take federal health data, using machine learning and data analytics, and bring it to our public health authorities,” Finne said.
Read the source article in FederalTimes.
A New Problem for Artificial Intelligence: Data is Not Sexy
From c4isrnet: Almost a year after the Department of Defense created a hub for artificial intelligence, the Joint Artificial Intelligence Center, the center is emphasizing the need for the Pentagon to collaborate with industry to be successful.
“This line of effort comes from a place of humility,” said Mark Beall, chief of strategic engagement and policy at the Pentagon’s JAIC. Beall spoke at the AI World Government Conference in Washington June 26. “AI is not necessarily of core prominence in our organization, so, as a result, we need to begin to look externally to industry to teach us how to use this technology and use it responsibly.”
Pentagon leaders created the JAIC in June 2018 to assist with the development of AI capabilities to use across the department. However, Beall said Defense Department officials may have underestimated their starting point.
“Some people may say, ‘well, why did the DoD start with AI strategy?’” Beall said. “Why didn’t it start with a data strategy? I think the answer to me is it’s very clear that data is not sexy. It’s simple and not sexy, so how do we create a burning platform under which we can rally the department to undergo the types of changes that it needs to go in the 21st century? So we decided to have an AI strategy.”
Now, Beall said the department needs to treat data as a strategic asset, which he describes as a cultural change in the department.
“In this rush to adopt AI, obviously as you guys know, it’s not just AI,” Beall said. “There’s this cultural change that has to happen and I think that kind of revolves in the first instance around the way we treat our data.”
To help with that, leaders at the JAIC have learned it is necessary for the JAIC to work with industry partners and international allies.
“Reaching out to industry the entire time and asking them, what do you suggest?” Roberts said. “What do you propose? You give us an example of a work statement in a way that you think this is going to work.”
Gauging the Right Amount of Government Regulation of AI
From SearchEnterpriseAI: As organizations begin moving AI technologies out of research and testing and into deployment, consumers, technologists, policymakers and businesses likewise have started to understand just how much AI is changing the world and that some kind of government regulation of AI is necessary.
Like any powerful force, AI requires rules and regulations for its development and use to prevent unnecessary harm, according to many in the scientific community. Just how much regulation, especially government regulation of AI, is still open to much debate.
Most AI experts and policymakers agree that at least a simple framework of regulatory policies is needed soon as computing power increases steadily, AI and data science startups pop up almost daily, and the amount of data organizations collect on people grows exponentially.
“We’re dealing with something that has great possibilities, as well as serious [implications],” said Michael Dukakis, former governor of Massachusetts, during a panel discussion at the AI World Government conference here.
Many national governments have already put in place guidelines, if sometimes vague ones, about how data should and shouldn’t be used and collected.
Some regulatory rules also govern whether AI should be explainable. Many AI algorithms run in a black box; their inner workings are considered proprietary technology and are sealed off from the public. The U.S. recently updated its guidelines on data and AI, and Europe recently marked the one-year anniversary of its GDPR.
Many private organizations have moved to set internal guidelines and regulations for AI, and have made such rules public in the hope that other companies will adopt or adapt them. The sheer number of different guidelines that various private groups have established indicates the wide array of different viewpoints about private and government regulation of AI.
“Government has to be involved,” Dukakis said, advocating for government regulation of AI.
“The United States has to play a major, constructive role in bringing the international community together,” he said. Countries around the world must come together for meaningful debates and discussions, eventually leading to potential international government regulation of AI, he said.
Bob Gourley, CTO and co-founder of consulting firm OODA LLC, agreed that governments should be involved but said their power and scope should be limited.
“Let’s move faster with the technology. Let’s be ready for job displacement. It’s a real concern, but not an instantaneous concern,” Gourley said during the panel discussion.
Regulations, he argued, would slow technological growth, although he noted AI should not be deployed without being adequately tested and without adhering to a security framework.
Before AI Blooms, DHS Intel Needs Help on Cloud, Data, and Acquisition
From MeriTalk: The Department of Homeland Security (DHS) is looking for technical and acquisition help from partners in refining its database to implement artificial intelligence (AI) solutions, DHS Intelligence and Analysis (I&A) Office CIO David Bottom said on June 25 at the AI World Government conference.
Bottom said that technologically, DHS is struggling with standardizing the platform to run AI, particularly as it moves to the cloud. He called the cloud the engine behind DHS’s AI capabilities, and the fuel is the agency’s database, which he added is the more difficult technological element to refine in adopting AI and where DHS is seeking aid.
“The challenge going forward is refining the process report for our data,” Bottom said. “We still don’t have it. And I recognize that there are many different types of data out there, various quality levels and provenance and usability for AI purposes. It’s a problem that we need to tackle.”
The second obstacle to developing AI at DHS, Bottom added, is the department’s struggle to acquire quality metadata. Mapping the cost benefits of cloud acquisition is doable; doing so while taking data acquisition efforts into account is much more difficult.
“Getting to provenance-quality metadata, making it usable for AI and models, is that [not] a harder challenge than transitioning systems to the cloud?” Bottom said. “We pretty much know how to do that. We haven’t figured out the second part. And we haven’t figured out the model behind that because it’s not necessarily cheaper. Working to improve a dataset, working with the components and DHS to make the dataset easier to use, useful and discoverable — is not necessarily something that’s going to get you a lower operating cost.”
Bottom stressed that despite the challenges DHS faces in adopting AI, it has acknowledged the benefits of the technology in the department’s operations. He said that it has the ability to enhance DHS’s counter-terrorism, cybersecurity, and counter-intelligence operations, and DHS has mapped out how it would use AI to identify and dismantle transnational organized crime networks.
Industry Craves Deeper Trust From Intel Agencies for AI Success
From fedscoop: When working with the private sector, simply providing predictable data for artificial intelligence and machine learning models won’t cut it anymore.
To fully develop the use of these technologies, the intelligence community needs to work with the private sector more closely, Justin Fier, director of cyber intelligence and analytics at Darktrace, said during a panel at the AI World Government conference Monday.
“You can’t test [machine learning]-based solutions the way you tested old software,” Fier said. “I need real user data.”
Fier cited the difficulty in having that level of trust between secretive government agencies and the private sector but added that the two sectors need to work closely together to achieve common goals.
The technology needs to come from the private sector first, Teresa Smetzer, chief executive officer of Smetzer Associates and former director of Digital Futures at the CIA, said in a separate panel Monday.
“We have made that mistake many times over,” she said about not looking to private sector innovation.
Disinformation and misinformation campaigns are the exact type of threat AI can help both create and combat. Campaigns like Russia’s in the 2016 U.S. presidential election amount to “a war of cognition,” said Brett Horvath, president of Guardians.ai.
The fear for the intelligence community is that no one part of the government is big enough, fast enough and has the money to fully develop and implement artificial intelligence to combat threats like disinformation, Smetzer said. Collaboration with private companies is crucial.
But change must come from within as well. The intelligence community and the defense industry have long used the same acquisition model for purchasing machinery, a model that won’t work for emerging technology. There needs to be an increased level of technical acumen inside the agencies working with these technologies, said Todd Myers, automation lead at the National Geospatial-Intelligence Agency.
“There will always be this rubber band pulling you back,” Myers said about the outdated acquisition process. “There has got to be a complete paradigm shift.”
The U.S. Air Force Looks to Leverage AI
From Trajectory: We are in the midst of a digital era, where technology is advancing faster than ever before and artificial intelligence is no longer a futuristic concept. Today, AI has a place in every aspect of our lives, from healthcare to education to military training.
At the AI World Government Conference in D.C. on June 26, Capt. Michael Kanaan, the service’s co-chair for AI, spoke about how the Air Force has been on a journey to leverage AI for nearly three years.
“We had to find a way to get us to a place where we could talk about AI in a pragmatic, principled, meaningful way,” Kanaan said, according to C4ISRNet.
One avenue through which the service is exploring the technology is simulation. Earlier this month, the Air Force Life Cycle Management Center’s Simulator Program Office announced it will host a contest for small businesses to apply AI for the purpose of improving simulators, reported Defense News.
The Air Force released a list of proposal focus areas, including: cloud-based simulators; AI-aided instruction in simulators; visual acuity and fidelity of objects at long ranges within the simulator environment; interoperability among networked simulators; simulator interoperability considering releasability of capabilities; and more.
“We are not asking small business to go out and invent something new,” Margaret Merkle, program manager in the Simulators Division, said via the Air Force news release. “We are asking if they have technology that we can leverage for the Air Force. Ultimately the idea is to connect with industry which will help us move into the latest technological space faster.”
The proposal process will take place in two phases. Small businesses have until July 1 to submit their proposals for Phase I Small Business Innovative Research (SBIR) awards. Following will be a two-week evaluation period and one-week contracting sprint.
Companies selected during Phase I will have the opportunity to submit proposals in the September/October timeframe for a Phase II award that will build upon what was learned and demonstrated during their Phase I efforts. Select companies will also be invited to present on Dec. 4 at a Simulators Pitch Day in Orlando, Fla.
VA Signals Focus on User-Centric Analytics
From GovernmentCIO: In a move toward overcoming challenges around providing care best tailored to veterans’ needs, the Department of Veterans Affairs is turning to big data analytics to improve the agency’s core services, top agency leaders detailed at the recent AI World Government conference in Washington, D.C.
The agency has made significant advances in its human-centered design policy by leveraging both veterans’ data and personal feedback to institute agency-wide reforms, explained conference speakers Chief of Staff Lee Becker and Director of Enterprise Measurements Anil Tilbe of the Veterans Experience Office.
This newfound approach helped inform the Journeys of Veterans map, a comprehensive outline of how veterans are introduced to and subsequently navigate the VA and its myriad services. Rather than an agency-centric perspective that views the VA as a siloed bureaucracy, the Journeys of Veterans approach considers how veterans with particular needs are guided through intake and care — evaluating the process holistically from referral to ongoing treatment. Becker summarized this philosophy as “wanting to make sure we’re solving problems for the right reasons.”
Becker highlighted the need to tailor the VA’s services to best help veterans reintegrate into civilian life, particularly through providing effective post-traumatic stress disorder (PTSD) counseling and helping former servicemen avoid substance abuse and homelessness. As both speakers noted, veteran services that are difficult to access, or seem daunting to apply for, risk turning away applicants who have the strongest need for VA care.
Becker also emphasized that VA reforms are an ongoing process informed by veteran feedback and that the agency will continue evolving to best meet the needs of returning soldiers. This has involved combining big data analysis with customer experience review to appraise both the initial application process as well as veterans’ interactions with the myriad forms of VA care. The agency’s website has also undergone extensive redesign based on the evaluation of user data, with VA analysts noting which pages are most commonly sought by visitors while ensuring key services are easy to view and access.
The ultimate goal appears to center on building a dynamic optimization framework that is especially responsive to user experience. Recent innovations in VA care have been supported in part by integrating best practices from the private sector, drawing on how corporations renowned for the quality of their customer service have shaped their business models to build and sustain positive customer relations. This has run in tandem with integrating new developments in big data processing, particularly the coalescing of separately compiled data sets.
Paradoxically, this collection and processing of big data appears to be making the VA’s care process more fundamentally human. The newly launched Veterans Signals initiative has focused on collecting and analyzing survey data from veterans who have interacted with various stages of the intake-to-care process, breaking down analysis depending on which specific benefits or treatment are being sought. Insights from this collection of user surveys are then used to make reforms throughout the agency as a whole, ensuring the concerns of veterans are being duly addressed across all VA services.
[Ed. Note: Watch for continued coverage from the AI World Government event in AI Trends.]