I’m thinking Google Glass may epitomize “disruptive” learning (& many other things)

Google released a full explainer video of Google Glass today. And wow. Here’s my initial take on why this could be a game-changer.

Imagine you have a sophisticated but tiny computer with you all the time. Actually, many of us already do – smartphones! Google simply ups the ante. Now imagine having the benefits of the computer without needing to manhandle a device: glasses you wear that project the “monitor” onto your eyes. You talk to your computer to direct it, using Google’s version of Siri.

So learning, video, connecting, communicating – it’s all right there, built into your field of vision. Watch the video below to really grasp the implications. The next logical stage would be contact lenses – why wouldn’t it be? You control how much you use it, just like a phone, and it’s massively empowering to have knowledge, collaboration, and communication tools rolled on-demand into your vision, all without needing to touch a thing. This is the beginning of a great trend.

A MUST-watch (it’s long so I made this link start right where they demo Glass so you can see it in action & instantly “get it”): https://www.youtube.com/watch?feature=player_embedded&v=JpWmGX55a40#t=192s

Holy cow! IBM’s Watson has the knowledge of a medical school student, now the size of a pizza box. Implications:

OK, so pause and just think of the following not just in terms of medicine (which is amazing!) but also other fields: education, engineering, biology, etc…

IBM’s Watson kicked the Jeopardy! champs’ butts, & we all got a cool quote out of it about welcoming our new computer overlords. But as we move on from that memeful moment in history and stop thinking about it, Watson is still there, progressing, becoming inexorably smarter. It’s staggering to watch this unfold as Watson rolls out to the medical field. This is a tangential extension of my previous thoughts on Apple’s Siri.

According to some estimates, health care pros make accurate treatment decisions only 50% of the time. Watson has shown it can be accurate in its decisions 90% of the time, although it’s not yet near that level with cancer diagnoses. Patients need 100% accuracy, of course, but leaping from being right half the time to being right nine times out of ten – a five-fold drop in the error rate – will be a huge boon for patient care. The best part is the potential to distribute that intelligence anywhere via the cloud, right at the point of care. This could be the most powerful tool we’ve seen to date for improving care and lowering everyone’s costs through standardization and reduced error.

Watson has made huge strides in its medical prowess in two short years. In May 2011 IBM had already trained Watson to have the knowledge of a second-year medical student. In March 2012 IBM struck a deal with Memorial Sloan Kettering to ingest and analyze tens of thousands of the renowned cancer center’s patient records and histories, as well as all the publicly available clinical research it can get its hard drives on. Today Watson has analyzed 605,000 pieces of medical evidence, 2 million pages of text, 25,000 training cases and had the assist of 14,700 clinician hours fine-tuning its decision accuracy. Six “instances” of Watson have already been installed in the last 12 months.

Watson doesn’t tell a doctor what to do; it provides several options with a degree of confidence for each, along with the supporting evidence it used to arrive at the optimal treatment. A doctor using an iPad can input a new bit of information in plain text, such as “my patient has blood in her phlegm,” and within half a minute Watson will come back with an entirely different drug regimen that suits the individual. IBM Watson’s business chief Manoj Saxena says that 90% of nurses in the field who use Watson now follow its guidance. That’s remarkable.
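To make the workflow above concrete, here’s a purely illustrative toy sketch of that interaction pattern – plain-text findings in, confidence-ranked options with supporting evidence out. The regimen names, scores, and matching rules are all invented for illustration; this is not Watson’s actual API or logic.

```python
# Toy model of a decision-support workflow: given plain-text findings,
# return treatment options ranked by confidence, each with the evidence
# behind its score. All names and numbers here are hypothetical.

def rank_treatments(findings):
    """Return (option, confidence, evidence) tuples sorted by confidence."""
    options = []
    # Hypothetical scoring rules keyed on plain-text findings.
    if "blood in" in findings and "phlegm" in findings:
        options.append(("Regimen B (adjusted for hemoptysis)", 0.72,
                        ["new finding: blood in phlegm"]))
        options.append(("Regimen A (standard)", 0.31,
                        ["weakened by new finding: hemoptysis"]))
    else:
        options.append(("Regimen A (standard)", 0.85,
                        ["no contraindications found"]))
    # Rank descending by confidence; the clinician still decides.
    return sorted(options, key=lambda opt: opt[1], reverse=True)

# A new bit of plain-text information changes the ranking:
print(rank_treatments("patient stable"))
print(rank_treatments("my patient has blood in her phlegm"))
```

The point of the pattern is the last line of the sketch: the system re-ranks its suggestions when new information arrives, rather than issuing a single verdict.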

Imagine, several years from now, combining Watson version 7 with Siri version 9. “Siri, I’m not feeling that well – can you help me…?” (Yes, probably.)

Over the past two years, IBM’s researchers have shrunk Watson from the size of a master bedroom to a pizza-box-sized server that can fit in any data center. And they improved its processing speed by 240%. Where will it be by 2020?

So, coming full circle: where will this (or Watson’s cousins – similar programs) take us with education? Who knows. But imagine what it could do for third-world countries as smartphones reach them (coming soon). It’s inspiring to imagine. We can only stay curious, hungry to learn, on top of trends and technology, so we can help facilitate this. Let’s be ready.

Guess what Apple’s $137b + new job listings for Siri equals?

Yep, this math is easy. Apple has BIG ambitions for Siri – & they should! “We spend so much time with our cellphones that having an effective personal assistant could be revolutionary,” said Andrew Ng, director of Stanford University’s AI Lab.

It’s fun to speculate about how a business will spend $137b, though ultimately it is indeed speculation. But job postings for Siri developers (as noted today by Wired) give a nice heads up about where some of that cash hoard could go.

It makes sense. Apple’s competitor Google seems to be shifting strategy toward some sort of Chrome/mobile combination to prepare for hybrid mobile PCs. Ever get frustrated fumbling to type or navigate on a small phone?

Think of it – an omnipresent personal AI assistant, convenient AND smart. It’s not a matter of if but when. It will be a big, big market, one that Apple has entered early… and has the resources to split wide open, with plenty to spare for the rest of its business.

Imagine the potential for learning and getting things done, to create and to drive change. And imagine that potential when smartphones get into the hands of souls in the 3rd world…