Friday, September 23rd, 2011

The Jig Is Up: Thought-Scanning Is Almost Here


This is mind-blowing and deeply disturbing: Scientists at the University of California, Berkeley, using functional magnetic resonance imaging, have "reconstructed the internal 'movie' that plays in a person’s head." So, the images on the left there are of what volunteers were watching on a screen; the images on the right are reconstructions built from the simultaneous activity in their brains. Pretty close to a motion-picture scanning of thought. This presents trouble for people who, no matter what they might be seeing outside their head, are always only seeing the pentagram from the cover of Rush's 2112 album or a giant pile of pistachio nuts inside their head. Similarly troubling, scientists are also making progress on the first physiological gauge of pain. Which will eventually make it more difficult to get out of class to go to the nurse by saying your foot hurts.

17 Comments

Is that Jesus?

oxla (#12,069)

always only seeing . . . a giant pile of pistachio nuts inside their head.

get.out.of.my.MIND!

riggssm (#760)

The first season of Fringe had Walter doing something like that, if I recall correctly. Pretty amazing stuff.

(In other news, 11 hours, 30 minutes until Fringe season [series] premiere!!)

oxla (#12,069)

@riggssm synchronize swatches, geeks!*

*or i guess construct a multidimensional machine(s) and uh…use that, instead.

dntsqzthchrmn (#2,893)

@riggssm Why am I so much more excited about this than I was about Torchwood: Jack Gets Bruised?

Astigmatism (#1,950)

Interesting that the scans for each person in the videos have the same sort of medium-tan skin tone. See, young people really don't "see" color!

Vera Knoop (#2,167)

@Astigmatism That was striking to me too, but not so much because of skin color as because the faces being seen in the mind were not just fuzzed-out versions of the one being presented. There was detail there that wasn't present in the external image. I know that a different part of the brain handles facial recognition; maybe, on being confronted with a new face, we "scroll" through familiar faces to sort of triangulate a resemblance?

Not convinced. I saw many of those images in the right panel when I went to school in Berkeley, and that was 20 years ago.

In fact, I still do.

jfruh (#713)

OK, if you only watched the video and didn't read the explanation, it's kind of … weird. (I'm getting this off of another second-hand source, not the linked video, so correct me if I'm wrong.)

First what they did was show people a CRAPLOAD of YouTube videos, and recorded what sort of electrical activity those YouTube videos stimulated in the subjects' brains.

Then they showed the subject a video that wasn't from that original set and recorded the brain activity from that. They sliced that brain activity into discrete moments (think of them as "frames"), tried to find moments from the earlier viewing sessions where the brain activity looked like that, figured out which video segments corresponded to those earlier moments of similar brain activity, and then put those moments together.

That explains, for instance, why in many cases a human face in the real video corresponds to what's obviously a different blurry human face in the internal video. The internal video is only as good as the "training" set of images.

hypnosifl (#9,470)

@jfruh That's almost right, but not quite: the giant archive of YouTube videos used in the reconstruction wasn't actually shown to the people beforehand. Instead they showed them a small set of movie trailers and recorded their brain activity, and the computer tried to figure out the relationship between the images in those trailers and what was going on in their brains. Based on that, the computer made predictions about what the brain activity might look like for the huge set of YouTube videos it had in its archive.

Then they showed the person a second set of movie trailers. The computer was only given the brain activity recorded while they watched (no info about the second set of trailers themselves), and it compared this real brain activity to the simulated brain activity it had predicted for all the YouTube videos. Based on this it picked the 100 videos whose simulated brain activity best matched what was actually going on in the person's brain during the second set of trailers, and averaged those 100 YouTube videos together to get the final result.
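
(For the curious, here is a minimal sketch of that last matching-and-averaging step in Python. Everything in it is invented for illustration: random vectors stand in for fMRI voxel activity, and a fixed random linear map stands in for the encoding model the researchers actually fit from the first set of trailers. It only shows the "predict, compare, average the top 100" logic, not the real modeling.)

# Toy sketch of the "match and average" step described above.
# All names and sizes are made up; nothing here comes from the study's code.

import numpy as np

rng = np.random.default_rng(0)

N_VOXELS = 500        # length of the (fake) brain-activity vector
N_LIBRARY = 5000      # clips in the YouTube-style library
FRAME_PIXELS = 1024   # each clip reduced to a single feature vector here
TOP_K = 100           # how many best-matching clips get averaged

# Fake library of clips (one feature vector per clip) and a fake
# "encoding model" that maps a clip to predicted brain activity.
library_clips = rng.normal(size=(N_LIBRARY, FRAME_PIXELS))
encoding_model = rng.normal(size=(FRAME_PIXELS, N_VOXELS)) / np.sqrt(FRAME_PIXELS)

# Step 1: predict what brain activity WOULD look like for every library clip.
predicted_activity = library_clips @ encoding_model   # shape (N_LIBRARY, N_VOXELS)

# Step 2: the "second set of trailers" -- all we get is measured activity.
true_clip = library_clips[42] + 0.1 * rng.normal(size=FRAME_PIXELS)
measured_activity = true_clip @ encoding_model + 0.5 * rng.normal(size=N_VOXELS)

# Step 3: correlate the measured activity with every predicted one.
def correlate(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean(axis=1, keepdims=True)) / b.std(axis=1, keepdims=True)
    return (b @ a) / len(a)

scores = correlate(measured_activity, predicted_activity)
best = np.argsort(scores)[-TOP_K:]          # indices of the 100 best matches

# Step 4: average those 100 clips to get the blurry "reconstruction."
reconstruction = library_clips[best].mean(axis=0)
print("best single match:", best[-1], "reconstruction shape:", reconstruction.shape)

(The blurriness of the published reconstructions falls out of that last step: a hundred different library clips smeared together can only ever be an approximation of what the person saw.)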

jfruh (#713)

@hypnosifl Thanks! That's even WEIRDER. But it is interesting in that I feel like this is a trend in this kind of science. Like, we're not trying to tease out and interpret exactly what every single electrical impulse means; we're just going to use computational brute force to correlate patterns we don't understand to results we can measure.

doubled277 (#2,783)

@jfruh or, more to the point, I think: we are using that computational brute force to correlate patterns we don't understand to results we want.

Dave Bry (#422)

Ahhh. Thanks, all, for this. So it's actually not as crazy and mind-blowing as it maybe seemed? (My understanding of for-real science rarely matches my interest in it. Simple minds are easily blown, I suppose.) But I bet soon the government will say they've mastered thought-scan imaging and just hook our brains up to some electrodes and hook the electrodes up to a screen and press play on a clip from like, "Red Dawn" or something and be like, "Ah ha! You were planning to blow up a military convoy! Guilty!" And I'll be dumb enough to believe it, like, "Huh, well, yes. I guess that is what I was thinking. It's right there on that YouTube screen, clear as day."

doubled277 (#2,783)

@Dave Bry Yup, sounds about right for me, too

SidAndFinancy (#4,328)

I knew you were going to post this.

Matt (#26)

Uh, how can you mention the pentagram and not mention the naked dude with the tight buns, dude?

Dave Bry (#422)

@Matt Ha ha ha. Busted again! (Shhh. See, it's working ALREADY!)
