Can AI and autonomous systems detect hostile intent?

Urban warfare (Photo: Army/Steve Reeves, Fort Jackson Leader)



The Defense Advanced Research Projects Agency is experimenting with sensors, artificial intelligence, drones and human psychology to better protect troops with technologies that can distinguish between threats and noncombatants. The Urban Reconnaissance through Supervised Autonomy (URSA) project aims to use autonomous systems to help the military detect hostile forces in cities and positively identify combatants before any U.S. troops come in contact with them.

FCW, Defense Systems' sibling site, talked with Army Lt. Col. Philip Root, the acting deputy director for DARPA's Tactical Technology Office, to get an update on URSA and how the Defense Department plans to forge relationships between humans and machines.

This interview was edited and condensed for clarity.

So what’s the URSA pitch?

URSA, which just started Phase 1 this year with four performers, takes a different look at the vexing problem of discriminating hostile and non-hostile [individuals] in urban operations. We want to provide more awareness so when a soldier or Marine encounters an individual … they have more information about that individual's intentions as that person comes into view. It may seem crazy to aspire to have that level of discrimination, but at traffic control points we do the same thing.

For example, with a van speeding toward a traffic control point at 55 mph, soldiers have 15 seconds from the time they see that van to the time it could explode. So in 15 seconds, a soldier has to identify whether that’s a van full of explosives or a van full of kids.
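To make that arithmetic concrete: 55 mph is roughly 25 meters per second, so a 15-second window corresponds to first sighting the van at something like 370 meters out. The snippet below is a rough back-of-the-envelope illustration in Python; the sighting distances are assumed for the example, and only the 55 mph and 15-second figures come from the interview.

```python
# Back-of-the-envelope illustration of the decision window at a traffic
# control point. Only the 55 mph speed and the ~15-second window come from
# the interview; the sighting distances below are assumed for illustration.
MPH_TO_MPS = 1609.344 / 3600.0  # miles per hour -> meters per second

def decision_window_seconds(sighting_distance_m: float, speed_mph: float) -> float:
    """Seconds from first sighting until the vehicle reaches the checkpoint."""
    return sighting_distance_m / (speed_mph * MPH_TO_MPS)

if __name__ == "__main__":
    speed_mph = 55.0
    for distance_m in (100, 250, 370, 500):
        window = decision_window_seconds(distance_m, speed_mph)
        print(f"sighted at {distance_m:>3} m at {speed_mph:.0f} mph -> {window:4.1f} s to decide")
```

At about 370 meters of clear line of sight, the math works out to roughly the 15 seconds Root describes; closer sightings leave far less time.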

How do you even do that?

It's amazing, right? So from the outside, it's the same. We have to get inside the driver's head to understand intent. We do that by putting out signs, such as a stop sign. If they speed by the stop sign, that's information. So we're putting out a sign, a probe, to tell the target, someone we're watching, to stop. And then we give them another sign -- send out a flare or fire warning shots, depending on the rules of engagement -- to insert a message. And how they respond is more information.

A van full of kids that blows through that stop sign isn't necessarily a target. If they blow by several, that still doesn't make them a target. But at some point we say, "You've failed a number of tests here."
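One way to picture that "failed a number of tests" logic is an escalating-probe loop that accumulates evidence and, past a threshold, flags the case for a human. The sketch below is hypothetical: the probe list, weights and threshold are invented for illustration and are not URSA's actual design.

```python
# Hypothetical sketch of the escalating-probe idea described above.
# The probes, weights and threshold are invented for illustration;
# URSA's actual models and thresholds are not public.
from dataclasses import dataclass

@dataclass
class Probe:
    name: str
    weight: float  # how much a failed response raises suspicion

PROBES = [
    Probe("posted stop sign", 1.0),
    Probe("audible warning from drone", 1.5),
    Probe("laser designator on the ground", 2.0),
    Probe("flare / warning shot (per rules of engagement)", 3.0),
]

FLAG_THRESHOLD = 5.0  # invented value; once crossed, refer the case to a human

def assess(responses: list[bool]) -> tuple[float, bool]:
    """responses[i] is True if the subject complied with PROBES[i].

    Returns the accumulated suspicion score and whether the case should be
    flagged for human review. The autonomy only assembles information for
    the soldier; it takes no action on its own.
    """
    score = 0.0
    for probe, complied in zip(PROBES, responses):
        if complied:
            # Compliance ends escalation: the subject has signaled
            # non-hostile intent and can route themselves out of the area.
            return score, False
        score += probe.weight
        if score >= FLAG_THRESHOLD:
            return score, True
    return score, score >= FLAG_THRESHOLD

if __name__ == "__main__":
    # Ignored the sign and the drone warning, then turned around at the laser.
    print(assess([False, False, True]))          # (2.5, False) -> not flagged
    # Ignored every probe.
    print(assess([False, False, False, False]))  # (7.5, True) -> human review
```

Compliance at any step ends the escalation, which matches Root's emphasis that the point is to let noncombatants remove themselves from the scene and that the autonomy never acts on its own.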

You can look at URSA as finding targets, but I don't like that view. I prefer the view of ensuring that noncombatants can get out of this scene. A van full of soccer kids turns around? Fantastic! We don't want you around; we want to give you awareness that this isn't a good day to be outside.

So as a military patrol is moving through a city, we’d love to let everyone know in advance. But they can’t all just leave. We have to operate with non-combatants around and provide them every opportunity to remove themselves from the environment. Anyone left would then have hostile intent.

We might send a message via drone, for instance, and say, "Today is not a good day to be outside. We recommend you go to the nearest building."

So a drone just comes down and starts talking?

Could be. We just started, so I don’t presume to know. It could come down and say, "U.S. forces are approaching. Not a good day to be outside." Anyone who stays outside might have a really good reason to be outside; it doesn’t mean they’re hostile in any way.

Could be they didn’t hear us, are deaf, it’s noisy out -- so we have to seek a different method. Maybe we put a laser on the ground to confirm they’re seeing it. Perhaps we play a popping sound and combatants and non-combatants respond differently. But at no point is the autonomy doing this on its own.

We just want to collect as much information as possible so that if someone with non-hostile intent wanders into a U.S. patrol, we can provide a folder of information before a soldier takes their finger out of the trigger well. Nobody wants to be in the situation where a soldier and a non-combatant come in contact and both are surprised.

There’s such a personal and emotional component to this. Do you have a suite of people working on this -- psychologists, behaviorists?

We have a team of behavioral psychologists and social-science models of how people respond. But, unfortunately, there's not a whole lot of data on these types of drone interactions. Nobody's tried this. We're going to watch the social science develop at the same time as the AI and machine learning. And I'm not convinced that it's going to work. But I'm convinced someone should be trying, so we can take these lessons learned and apply them to whatever comes next.

We have to be committed to this problem. We can’t shirk away from it, because the outcome is far more perilous with the current problem -- where soldiers and non-combatants are put in harm’s way.

One lesson we've learned is that in a real interrogation, the suspects who are angry are often the innocent ones, because they're so mad that they're caught up in this. To your point, if someone is having a bad day and a drone gets in their face, they might throw a rock at it. We have to understand and factor that in. It might mean that we're terrorizing the population. We could absolutely make the situation worse; we're very sensitive about it.

Have you started designing it? I’m thinking that would also have an impact on how people interact with the technology.

If you've ever had a drone buzzing in your face, I'm not sure there's a way to react other than with anger. All of that's real. A ground robot that crawls up to you, into your personal space, is not going to elicit a positive response. But at some point, that is appropriate to let people know we're serious. There's a spectrum of probing and solicitations, but we have to start with some suspicion and shouldn't terrorize people who are not suspicious.

We started the legal, moral and ethical considerations before we even awarded the contracts so we could get ahead of this. A panel of lawyers, ethicists, philosophers and academics meets quarterly to provide written technical guidance.

Harm has many forms. Clearly, warning shots have a greater potential for harm than just a message. Hopefully, warning shots are never necessary, but we have to understand this spectrum of the possibility of harm. It's never been done -- only theorized. We have to reduce that theory to practice, because the performers just want a number. When we design AI, it's just math.

Is it going to be right? Absolutely not, but it’ll be better than the nothing we have now.

This article was first posted to FCW, Defense Systems' sibling site.

About the Author

Lauren C. Williams is a staff writer at FCW covering defense and cybersecurity.

Prior to joining FCW, Williams was the tech reporter for ThinkProgress, where she covered everything from internet culture to national security issues. In past positions, Williams covered health care, politics and crime for various publications, including The Seattle Times.

Williams graduated with a master's in journalism from the University of Maryland, College Park and a bachelor's in dietetics from the University of Delaware. She can be contacted at lwilliams@fcw.com, or follow her on Twitter @lalaurenista.


