Worldwide neuroscience research conducted under Obama's BRAIN Initiative, together with similar research sponsored by the European Union, exceeds $1 billion in combined funding. The goal is nothing short of decoding the human brain. While there are many embedded initiatives associated with this type of research, the production of artificial intelligence that can rival or even surpass humans is at the forefront.
One recent development aims to move beyond mere computational horsepower and incorporate the principles of Darwinian evolution in order to naturalize the process of robot evolution.
The genetic rewrite predicted long ago by H.G. Wells
Jay Dyer
It is no accident, nor any organic, “grassroots” trend, that numerous forthcoming films focus on artificial intelligence and the transhumanist takeover. From H.G. Wells’ tales of genetic chimeras in The Island of Dr. Moreau, to The 6th Day with Schwarzenegger, to coming A.I. films like Chappie, the predictive programming preparations are rolling out. My recent research has focused on the Manhattan Project, and like the MK ULTRA programs, the Manhattan Project had a much wider application than is commonly known.
In fact, MK ULTRA and the Manhattan Project are connected through biometrics and bio-warfare. As MK ULTRA faded away, the program was renamed MK SEARCH and transferred to Fort Detrick, one of the U.S. Military’s biological weapons-focused bases. And in both MK SEARCH and the Manhattan Project, we find an overarching ideology of transhumanism whose origins lie much earlier, with the alchemists of the ancient world.
The Manhattan Project is publicly known as the secret, years-long operation devoted to developing the atomic bomb, yet the truth is much deeper and darker. It was actually a vast program concerned with radiation, human exposure, and a grand telos: engineering resistant, synthetic humanoids. What we can gather about this overall, long-term project suggests it was geared toward biologically engineering humans to withstand the coming onslaught of alterations to the entire biosphere.
A new system called Robo Brain is being funded by the usual suspects in the military-industrial-surveillance complex.
Nicholas West
The initiative to merge robotics with artificial intelligence continues to expand its vision. I recently wrote about an internal cloud network program which enables robots to do their own research, communicate with one another, and collectively increase their intelligence in a full simulation of human interaction. It has been dubbed "Wikipedia for Robots."
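Neither project publishes its internals in a form I can point to, so the sketch below is purely illustrative of the "Wikipedia for Robots" architecture described above: a shared, cloud-hosted knowledge store that many robots read from and write to. Every class and method name here is hypothetical; none of it comes from RoboEarth, Robo Brain, or any published API.

```python
# Purely illustrative sketch of a shared robot knowledge base ("Wikipedia for
# Robots"). All names are hypothetical; this is not any project's real API.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class SharedKnowledgeBase:
    """Stands in for a cloud service that every robot can reach."""
    facts: dict = field(default_factory=dict)

    def publish(self, concept: str, lesson: str) -> None:
        # One robot's learned lesson becomes available to every other robot.
        self.facts[concept] = lesson

    def lookup(self, concept: str) -> Optional[str]:
        return self.facts.get(concept)


cloud = SharedKnowledgeBase()

# Robot A learns something through trial and error and uploads it.
cloud.publish("mug", "graspable from the handle; keep upright when full")

# Robot B, which has never handled a mug, benefits immediately.
print(cloud.lookup("mug"))
```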
A parallel project in Germany went further by seeking to translate the open Internet into a suitable robot language that would prompt accelerated, autonomous machine learning.
Now researchers at Cornell are presenting Robo Brain, "a large-scale computational system that learns from publicly available Internet resources." Evidently, it is learning quickly.
Ok Google, I have an addendum to your unofficial motto: “Don’t Be Evil…And Don’t Create Skynet”. The Silicon Valley giant has made its machine learning software available via ‘the Cloud’, that is, the Internet (or at least the distributed network of millions of servers that forms its backbone). The Google Prediction API will allow third-party developers to access these machine learning capabilities from other programs, possibly enabling a new generation of smarter, better Apps and websites. This sort of artificial intelligence is narrow in the sense of what it can learn, but it is absurdly general in how it can be applied: figure out which products your customers are likely to buy, sort incoming emails as friendly or hostile, or determine whether a Facebook status update carries newsworthy information. The Google development video below discusses the possibilities of the Prediction API. It looks like cloud-based AI is going to be a very useful tool.
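To make the "sort incoming emails as friendly or hostile" example concrete, the Prediction API was trained on labeled examples supplied as plain CSV; as I recall the old documentation, the first column held the label and the remaining columns the features, but treat that layout as an assumption. A minimal sketch of preparing such a file:

```python
# Minimal sketch of labeled training data for a friendly-vs-hostile email
# classifier. The first-column-is-the-label CSV layout reflects my recollection
# of the retired Prediction API docs and should be treated as an assumption.
import csv

examples = [
    ("friendly", "Thanks so much for the quick reply, this fixed everything."),
    ("hostile", "This is the third time the product has broken. Unacceptable."),
    ("friendly", "Great update, the new release works perfectly for us."),
]

with open("labeled_emails.csv", "w", newline="") as f:
    csv.writer(f).writerows(examples)
# The resulting file would then be uploaded to Google Storage for training.
```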
As befits a search engine company, Google has to wade through massive amounts of data and find the kernels of really important information. It also needs to tailor the search results you receive to your location and preferences. The latter may be the inspiration behind code like the Prediction API, which learns to predict which data best fits a situation from examples. The former is probably the basis for Big Query, the other software discussed in the Google Developers video below. Both of these programs have been used inside Google, in various forms, for years. Now they could be available to you.
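To make the "filtering big sets of data" contrast concrete, here is a sketch of a Big Query style query issued from Python. It uses today's google-cloud-bigquery client library and a public sample dataset rather than the 2010-era interface described in the video, so take it as an illustration of the workflow rather than of the original API.

```python
# Sketch of filtering a large dataset with BigQuery from Python. Uses the
# modern google-cloud-bigquery client (pip install google-cloud-bigquery);
# "my-project" is a placeholder project id.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Find the most frequent long words in the public Shakespeare sample table.
sql = """
    SELECT word, SUM(word_count) AS total
    FROM `bigquery-public-data.samples.shakespeare`
    WHERE LENGTH(word) > 8
    GROUP BY word
    ORDER BY total DESC
    LIMIT 10
"""

for row in client.query(sql).result():
    print(row.word, row.total)
```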
If you’re lucky, that is. Right now, access to Big Query and the Prediction API is limited. (You can sign up for the Prediction API here and for Big Query here.) According to Technology Review, the number of current developers is only in the hundreds. But that’s likely to change once Google finishes its test runs for both programs.
Soon, every software developer may have access to massive data analysis and machine learning code. What will that look like? Well, you’ll upload data to Google Storage, then spend some time training the Prediction API to give you the kind of results you want (by feeding it the right examples). After that you’ll be able to call the API from your own App or website. It’s basically that simple. Working with Big Query would be very similar, only with less emphasis on learning and more on filtering big sets of data.
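Since the Prediction API has since been retired, the sketch below reconstructs that upload-train-predict workflow from memory of the old v1.6 documentation using google-api-python-client; the resource names, fields, project id, and model id are assumptions, not a verified recipe.

```python
# Rough sketch of the train-then-predict workflow described above, against the
# (now retired) Prediction API v1.6. Resource and field names are reconstructed
# from memory of the old docs and should be treated as assumptions.
from googleapiclient.discovery import build
from oauth2client.client import GoogleCredentials  # era-appropriate legacy auth

PROJECT = "my-project"                         # hypothetical project id
MODEL_ID = "email-tone-model"                  # hypothetical model name
TRAINING_CSV = "my-bucket/labeled_emails.csv"  # labeled rows in Google Storage

credentials = GoogleCredentials.get_application_default()
service = build("prediction", "v1.6", credentials=credentials)

# 1. Point the API at training examples already uploaded to Google Storage.
service.trainedmodels().insert(
    project=PROJECT,
    body={"id": MODEL_ID, "storageDataLocation": TRAINING_CSV},
).execute()

# 2. Poll until training finishes (sleep/retry loop omitted for brevity).
status = service.trainedmodels().get(project=PROJECT, id=MODEL_ID).execute()

# 3. Ask the trained model to classify a new example.
result = service.trainedmodels().predict(
    project=PROJECT,
    id=MODEL_ID,
    body={"input": {"csvInstance": ["Thanks for the great support, very happy!"]}},
).execute()

print(status.get("trainingStatus"), result.get("outputLabel"))
```

With a model trained on a labeled file like the one sketched earlier, the predicted output label would be the friendly-or-hostile guess the article describes.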
Suddenly your social networking App is better at blocking spammers, or your bank App can better guess which transactions might be the work of an identity thief. This will greatly level the playing field for new companies. Instead of needing millions of dollars to develop your own state-of-the-art machine learning approach to a problem, you can just use the Google Prediction API.
Google, in turn, benefits from a learning machine that is constantly tested and improved by large numbers of users. The Prediction API code should be able to find ways of applying the lessons it learns in one application to another. Again, this is narrow artificial intelligence: it’s not going to be able to solve every problem, nor will it suddenly become self-aware. It should, however, become really, really good at performing the tasks it is taught.
And that learning code is going to be freely available over the Internet, distributed and mirrored thousands of times over so it can’t be lost, growing in processing power as servers and computers are added, and increasing in sophistication over the years. This is a very powerful situation. As I’ve said before, narrow AI applications like the Google Prediction API won’t spontaneously develop into a general (human-like) artificial intelligence, but I do think they are laying the groundwork for its creation. Right now this kind of machine learning is relatively simple, but if it continues to be developed without major setbacks, then in a few decades it could become something much more. We could probably trust it with anything short of nuclear weapons. Singularity Hub