A recent recruit of the US Defense Advanced Research Projects Agency (DARPA), part of the Department of Defense (DoD), Eric Davis – who joined earlier this year – has come up with a scheme dubbed the “Theory of Mind.”
According to reports, it is another DARPA attempt – this time via algorithmic capabilities – to predict, monitor, incentivize, and modify people’s future behavior.
This ambitious, to say the least, “upcoming” program – the existence of which has now, for some reason, been made public as a “special notice” – is framed as targeting adversaries and better equipping decision-makers within the US security apparatus to either deter or “incentivize” those adversaries.
The announcement could act as a deterrent in and of itself, and there is no doubt that the US, along with many other countries around the world, is invested in finding ways to predict and control people.
There is one certainty and several misgivings about this program, however. The certainty is that it requires collecting massive amounts of personal data. As for the misgivings: what exactly is an adversary, as far as the US (defense) establishment is concerned?
And if, theoretically, an algorithm were capable of what the DARPA notice describes – “not only to understand an actor’s current strategy but also to find a decomposed version of the strategy into relevant basis vectors to track strategy changes under non-stationary assumptions” – who and what guarantees that such an algorithm remains contained?
“Contained” may be a strong word here because, as reports note, the 2017 edition of the US DoD Dictionary of Military and Associated Terms is rather vague about what an adversary is.
“A party acknowledged as potentially hostile to a friendly party and against which the use of force may be envisaged” is how it is described there. However, “party” in this context is not defined at all.
The “Theory of Mind” is seen as an attempt at a new iteration of what DARPA has already been working on, such as the Total Information Awareness (TIA) program – announced in 2002 and slammed at the time by civil rights groups as “the closest to a true Big Brother in the US.”
Back to Eric Davis: this machine learning (“AI”) scientist was previously employed by a company called Galois, whose clients include DARPA, NASA, and the US Intelligence Community – while the Gates Foundation is listed as “a partner.”