I am not sure what is more surprising: that you can get 12 WPM by simply using your brain, or that you can train monkeys to transcribe the New York Times or Hamlet.
They're not transcribing it, so much as selecting the keys one would need to press in order to type the text. That is, the correct key lights up, and they move the cursor and mentally "click" to select it. This serves two purposes:
- WPM is a difficult metric to "game", since if they make a mistake they have to select the backspace key, and then select the correct key again.
- It's a proof-of-concept demonstration that a human (e.g. in the BrainGate[1] clinical trial) could use this prosthetic system to communicate in text. When a human is driving the system, the keys wouldn't light up; they'd just type freely.
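That first point, about mistakes requiring a backspace plus a re-selection, can be made concrete with a little arithmetic. This is a toy sketch, not the paper's actual accounting; the selection rate and error rate below are made-up illustrative numbers:

```python
def effective_wpm(selections_per_min, error_rate, chars_per_word=5):
    """Effective words per minute when every mistake must be corrected.

    Each error costs two extra selections: one for the backspace key
    and one to re-select the intended key. Rates are illustrative.
    """
    # Selections needed per correct character: 1 normally, 3 when an
    # error occurs (wrong key + backspace + correct key).
    selections_per_char = 1 + 2 * error_rate
    correct_chars_per_min = selections_per_min / selections_per_char
    return correct_chars_per_min / chars_per_word

# At ~65 selections/min with a 5% error rate, you land near 12 WPM:
print(round(effective_wpm(65, 0.05), 1))
```

So under these assumptions, "12 WPM" already absorbs the cost of corrections rather than ignoring them, which is what makes the metric hard to game.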
Ah, I was also tricked by "transcribe"... This is why reading science articles is so much better when there is also a comment section (HN/Reddit, etc.). So many articles don't actually point out what is really going on.
I've pretty much stopped reading articles entirely. I'd rather click through to the study, read the abstract, then skim through looking for anything interesting or obviously wrong. Combine that w/a high quality comment site as you mention, and it's a much more informative process, and takes about the same amount of time.
Presuming that organic computers are special in some way that would justify an inter-stellar war of conquest to acquire them, the care and feeding can't be all that expensive.
Perhaps a jar of nutrient-rich slurry, housing five or six brains each, and a monthly cleaning of the probes?
We will never be harvested (as a stand-alone species) for any of our parts by an interstellar-faring alien race -- if they can make it here via space vehicles, then they can surely create bio-computers in some lab without the deficiencies of the human body....
Unless, for some odd reason, our DNA is special such that the way our brains/neurons work would be of some benefit.
At that point though, they would only need to snatch up a few of us and reverse engineer our DNA/genome to do what was stated in step one....
This paper is from a colleague in my lab at Stanford - happy to answer questions.
For intuition, this is like clicking on an on-screen keyboard, so the typing speed is very high given the constraint of focusing on each key as a separate, discrete movement.
Part of me wonders though - things seem a lot less discrete in Dasher... but given that input is really in one dimension, and not two, it might be able to work faster...
There are discrete versions of dasher that operate using a single digital switch or actuator, rather than an analog input. As long as you can feed in one bit of information, Dasher can work with that.
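The "one bit is enough" idea can be sketched as a binary search over probability mass: each yes/no answer keeps either the top half of the remaining letter probabilities or the rest, so likely letters need fewer bits. This is a toy illustration, not the real Dasher algorithm (which uses arithmetic coding over a language model), and the letter probabilities below are made up:

```python
# Toy sketch of one-bit letter selection, Dasher-style in spirit only.
def select(candidates, bits):
    """Narrow a list of (letter, probability) pairs one bit at a time.

    Each bit answers "is the target in the top half of the remaining
    probability mass?" (1 = yes, 0 = no).
    """
    for bit in bits:
        total = sum(p for _, p in candidates)
        upper, mass = [], 0.0
        for letter, p in candidates:
            # Keep accumulating until we pass half the probability mass.
            if upper and mass + p > total / 2:
                break
            upper.append((letter, p))
            mass += p
        candidates = upper if bit == 1 else candidates[len(upper):]
        if len(candidates) == 1:
            break
    return candidates[0][0]

alphabet = [("e", 0.3), ("t", 0.25), ("a", 0.2),
            ("o", 0.1), ("i", 0.1), ("n", 0.05)]
print(select(alphabet, [1]))     # the most likely letter costs one bit
print(select(alphabet, [0, 1]))  # less likely letters cost more bits
```

The point being: with a good language model assigning those probabilities, common letters in context fall out of very few binary decisions, which is how a single switch can still yield usable text entry.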
This is very cool, but we've had spellers based on the P300 signal for some time; here's an article from 5 years ago explaining how to set one up: http://openvibe.inria.fr/openvibe-p300-speller/
So it looks like the big improvement here is speed; P300 spellers are slow, like 4 characters per minute. What techniques does this new research use to improve that detection speed? The article doesn't go into much detail about how the signal detection or text entry actually works.
There is super work going on in your camp, kudos! I really appreciated your review of fluorescent imaging in NHP, which is absolutely going to be, along with advancements in implantables, a major enabling tech for prosthetics. In our lab, we've recently started down the path of light field imaging for whole-brain, single-neuron recording in the larval zebrafish - supplementing existing work in developing very large, dense ephys recordings. As crazy as the NESD/MICrONS programs are, it really does feel like the field is transitioning to stitching together pieces, many years in the making, towards bona fide clinical applications. Exciting times!
http://www.leaflabs.com/neuro-about - if you check out lotus there are some links in there around the lightfield setup, some papers and a publication that cites perhaps everything you'd need to get started.
This particular approach would require an extension to do that, likely decoding from the hand and finger related regions of motor cortex. It's a great idea but would require additions to the algorithm to decode overlapping discrete actions for chord detection.
For a bit of context, Stephen Hawking could only manage about 1 WPM, last I read about him; I'm sure everyone here could appreciate a twelvefold increase in their ability to express themselves, even if the end result wasn't speedy.
Important note: they were using monkeys and didn't take advantage of the speedups that humans would use, like adaptive letter-guessing/autocomplete:
>“Also understand that we’re not using auto completion here like your smartphone does where it guesses your words for you,”
That would reduce the "distance" you have to move the cursor on average.
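The "distance" intuition can be made concrete: total cursor travel is just the sum of distances between consecutive keys, and any prediction that lets you skip letters can only shorten it (by the triangle inequality). A rough sketch, with approximate QWERTY grid coordinates of my own choosing, not anything from the article:

```python
import math

# Approximate QWERTY layout as a grid of (row, column) coordinates.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
POS = {ch: (r, c) for r, row in enumerate(ROWS) for c, ch in enumerate(row)}

def travel(text):
    """Total Euclidean distance the cursor moves to type `text`."""
    keys = [ch for ch in text.lower() if ch in POS]
    return sum(math.dist(POS[a], POS[b]) for a, b in zip(keys, keys[1:]))

full = travel("the quick brown fox")
abbreviated = travel("th qk brwn fx")  # pretend autocomplete fills the rest
print(full > abbreviated)
```

Fewer characters typed means strictly less cursor travel here, which is why the no-autocomplete caveat matters when comparing these WPM numbers to a smartphone keyboard.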
(Several years ago I had the idea to combine the accessibility software Dasher with the "Neural Impulse Actuator", which transduces brain waves, but never went through with it, though I bought the tech.)
Well, they're not using autocomplete, but the software sitting between the electrodes and the cursor is a black box as far as the article goes. If the software is trained or calibrated, what's to say it isn't picking up some information from the structure of English?
Also, depending on the setup of the onscreen keyboard, it too could have error-reducing aspects built in.
Also, it's unclear whether the 12 WPM is actual WPM with backspacing and correction, or an 'adjusted' WPM based on the error rate.
This isn't to say that it's not an impressive result regardless. Just that I'm more skeptical of the 12 wpm.
IINM, the limitation is in the ability to read such small signal changes. Considering all the noise (especially with topical sensors) and the microvolt signal levels, it can be trivial to detect an eye blink, where you're suddenly contracting muscles close to the brain, as opposed to distinguishing the thought pattern of "dog" vs. "cat".
How does this compare to eye tracking? 90% of the hyper-fancy things using brain activity get pwned by simple eye tracking, or are not usable in real life due to the equipment needed.
Even if eye tracking is better (and IIRC it is), this is still valuable for patients that don't have any other option left (like Stephen Hawking soon).
This still relies on visual feedback (you need to move a cursor on a screen). It would be better if you just had to think the letter itself, not drive a cursor over it.
Not much of a difference, actually. Once you've got your brain trained to select the letters by location you can remove the eye tracking part and just use abstract thought.
"I'm thinking of a letter... On the far left of the keyboard, middle row."
I think there's not enough bandwidth for that information density. These usually work with 2D directional commands: you're driving the cursor up/down/left/right. You're not thinking "letter E"; you look at the screen and think of the direction the cursor has to move from its current location to drift to E.
You can use auditory feedback instead of visual feedback to drive the system. Even if you are down to say 2 words per minute that's vastly better than zero.
This is a valid point and a real challenge of brain-computer interface (BCI) technology. It is largely trying to help those who suffer from locked-in syndrome, where they really do not have any reliable motor control at all, including eye movement. If you do have the ability to reliably execute any kind of motor control, such as eye movement or muscle twitch, that can be exploited for more effective, durable interface control than current state of the art of BCI.
On the other hand, it shows the primitive state of brain computer interfaces, that they are implanting a few electrodes in a region of the brain related to the hands, in order to move a mouse cursor. We're decades away from thinking to computers. What is the state of the art in brain interfaces?
If the steady erosion of privacy is combined with a steady loss of government arresting powers (i.e. coercion through violence) and the steady decline of social conservatism (i.e. coercion through blackmail and hypocrisy), I'm surprisingly okay with that.
Once your brain scan is a torrent, you'd better hope that nobody starts the Fourth Great Revival.
You might feel safe now, but can you really know for sure that you will never step out of lockstep with the ruling party? What about all future ruling parties? What happens if an easily-offended future regime finds your archived brainscan and decides that an opinion that is OK today makes you then in their eyes like a former Klansman?
If the ruling party has no arresting powers and social morality closely correlates with what people actually do, why should I worry if I disagree with them? Does a former (or hell, even present) Klansman face any penalties other than social ostracism today? As long as government police power over individuals gets slowly eroded, it becomes less and less dangerous to hold an unpopular opinion, even if everyone knows about it. I'm fine with my actions and opinions being public. I'm fine with being unpopular. I'm not fine with being arrested, fined, or sued for amounts that don't scale with wealth and income, and I think that is where civil libertarians should focus their energies.
Then the problem is in the social norms concerning doing business with unpopular people, not with the knowledge that leads to my being unpopular. Personally, I will do business with anyone, no matter how much I dislike them, and I think it would be a good social norm.
Government powers can only go up, never down. One possible exception is when a government is in total decay, but that's only a temporary state before it gets replaced by another strong government (through revolution or external conquest).
That's not really true. There were significant curtailments of US government power after scandals like Watergate, the Pentagon Papers, the revelations of MK ULTRA, and all the spying that was done on journalists, civil rights leaders, etc.
Of course, now there's been a backlash in the other direction, and it will probably get a lot worse before it will get better. But history has shown that it can get better.
Also, see the history of the Stasi in East Germany and Stalin in the USSR.
Amazing maybe, but not really news. Monkeys could pilot the Mercury capsule, too. With several months of intense training, you can make them do a lot of tasks involving pattern recognition. Reliable brain-sensing, on the other hand, is news.
Since people don't seem to get the reference: "Mercury astronauts are passengers, not pilots" was a prejudice against the budding manned space program, spread by among others Chuck Yeager. It seemed to hurt their ego that even a monkey could do it (…after months of training, which included electroshock therapy).
As someone else in the comments pointed out, they didn't transcribe it by reading it. The right letter to type was lit up on the screen. (The article mentions this, but doesn't emphasize it too well.)
Connecting brains to computers in order to type??? Or Elon Musk's "neural lace" that enables an interface between AI and brains??? Such sad, shallow visions for such powerful technologies.
Maybe instead, we'll start really asking what the good life is (for both society and the person), and we'll probably get answers like these (from various philosophers and religions): a life full of love, compassion, empathy, peace, transcendence, meaning, service, self-actualization, health, etc.
And better understanding, control, and training of the brain (maybe by using fMRI neurofeedback) may be a great tool in achieving all of those qualities in a safe, effective, and efficient manner, and of course fully based on personal choice.
How can this expand what it means to be human? How could it shape society?
And instead, we're talking about typing with the brain.
And we see this kind of thinking everywhere. For example, the world is filled with tons of "social" technologies, but we can't even get video chat with eye contact in people's homes.
How can this expand what it means to be human? How could it shape society?
"Typing with the brain" can not only be very useful, but potentially life-transforming for those with mobility impairments -- e.g. quadriplegics, and those fully or partially paralyzed for other reasons.
We can, and do, have video chat at eye level, but most people seem not to like it.
Typing is a first step and is a great way of testing the technology and verifying that the sensors are working as expected, as it's extremely obvious how accurately they're working.