A recent posting on TechCrunch, "Check-Ins, Geo-Fences, And The Future Of Privacy," has a good summary of the balancing act between privacy and geo-location and is worth a quick read. The addition of location-related information is a key component of the critical context required by the Snowflake Effect principle of getting things "just right": just the right stuff to just the right person at just the right time on just the right device in just the right way. However, there is, and likely always will be, a need to keep this location-based information in context itself, so that your location is used when you want it, with whom you want it, and where it will add value. And it can't require too much explicit input or action on our part, because we simply won't remember, and won't take the time and trouble, to do so all the time, which severely reduces the value for us and others. So we need all the help we can get to make smart decisions as automatically as possible, while still maintaining the various levels of control each of us will want; that, in itself, is a context-based, "it depends" kind of decision that is constantly changing.
And we are getting more and more help with all these decisions from many sources. Each of us has a growing army of support in the form of other people and all their input, as well as devices that are finally beginning to gain some "smarts" and do more than what we explicitly tell them to do.
It was therefore most interesting to me to read the comment:
As apps and mobile devices become more geo-aware, a balance will need to be struck between the over-sharing creepiness of constant location broadcasting in the background and the annoyance of the constant check-in chore. On Tuesday, at our Disrupt conference, Facebook’s VP of Product Chris Cox described a future where phones are “contextually aware” so that they can “check into flights, find deals at grocery stores,” and do other things for us at that right place, at the right time. “These things take a bunch of clicks now—it’s all wasting time,” he said. “The phone should know what we want.”
Context and contextual awareness IS the next great frontier in technology advancement and in the continued exponential growth of the Snowflake Effect of mass personalization.
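To make the idea concrete, here is a minimal sketch of the kind of rule a "contextually aware" phone might apply: act automatically only where location adds value, and only within the sharing level the user has chosen. Everything here, the Context class, places, and audiences, is hypothetical and purely illustrative, not any real platform's API.

```python
from dataclasses import dataclass

# Hypothetical snapshot of the context a "contextually aware" phone might assemble.
@dataclass
class Context:
    place: str   # e.g. "airport", "grocery_store", "home"
    hour: int    # local time of day, 0-23

# Illustrative rules only: automate where location adds value,
# and only within the sharing level the user has explicitly allowed.
def suggest_action(ctx: Context, allowed_audiences: set) -> str:
    if ctx.place == "airport" and "friends" in allowed_audiences:
        return "check into the flight and share with friends"
    if ctx.place == "grocery_store":
        return "surface nearby deals, share location with no one"
    return "do nothing; keep location private"

print(suggest_action(Context(place="airport", hour=9), {"friends"}))
```

The point of the sketch is the shape of the decision, not the rules themselves: the context is gathered automatically, but the levels of control stay in the user's hands.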
Good article in Fast Company on Creating Engaging Spaces for Engaging Ideas that served as a reminder of how important environment is in augmenting and supporting our learning and living. I’ve come to be totally convinced that we are essentially products of our environments, that we still understand very little about these effects, and that we most often seem to choose to ignore what we do know. I do my best to pay more attention to how environments affect me, both by creating environments that work best for me for a given function and, conversely, by adapting to an environment I’m in for maximum benefit.
For example, when I used to spend a lot of my time in airplanes flying around the world, I found that the environment in an airplane seat, especially on longer flights, was ideal for reflective thinking, note taking, and new idea generation. So I rarely got out my laptop, phone, or tablet and instead spent a majority of my seat time with pen and paper (usually the real thing, sometimes digital), jotting down notes as I reflected back on recent conversations, visits, reading, presentations, and the like. Environment is also one of the top reasons why my current lifestyle of living aboard my sailboat “Learnativity” and seeing the world from this aquatic perspective fits me so well. I made this unusual move to focus on learning as a way of being, and now, into my third year of it, it has been exceedingly successful.
So with this in mind, I recommend you take some of your valuable time to consider what has recently been going on at Stanford's new d.school building designed for innovation. At a recent “trade show”-like event they discovered the value of environment and noted:
Out goes the tech-focused approach of maximizing the number of iPads. In comes a human-centered approach of creating a warm living room. Using the nice furniture (on wheels) from the d.school, we created a nice cozy environment in the 8-foot by 8-foot space allotted to us.
You may have read some of my previous postings on prosthetics and how they are being dramatically affected by advancements in 3D scanning and printing. But prosthetics are providing so many more benefits and driving many other advancements, in particular the ability to control them directly with the brain, as outlined in the article below.
Oh, and when it comes to prosthetics, I’ve noticed that many assume they only apply to military personnel and casualties of the atrocities of war, but these are actually a small minority of the need for and use of prosthetics; other areas, such as accidents and birth defects, account for many more. Whatever the cause, the future of prosthetics is looking brighter than ever, and how great is it that these advancements can not only bring such transformative change to those in need but also provide additional benefits for the rest of us.
This whole area of enabling direct connections between the brain and external devices, from prosthetics to eyes to computer interfaces, is going through some remarkable advancements, and I’m convinced they will be the source of many spin-off advantages for an even wider array of applications. As the article outlines, there are still some major challenges to overcome, but this initiative and funding by DARPA focused on finding solutions is promising. Prosthetics and brain control are definitely worth watching for more glimpses of the premature arrival of the future and answers to my ongoing quest to have us all change our assumptions and to plan and wonder around the question of “What if the impossible isn’t?”
As the use of prosthetic limbs increases in military veterans, the Pentagon is investigating prostheses that are more durable, reliable and directly controlled using brain implants.
DARPA, the military’s research arm, said it will launch the next phase of its decade-old Revolutionizing Prosthetics program, which had an original goal to create a fully-functioning, neurally-controlled human limb within five years.
My first and still favorite music recommender system, Pandora, has been struggling to survive and has had several near-death experiences over the past 10 years. So I’ve been delighted to see a flurry of reports in the past few weeks about how things have recently changed for the better for Pandora. See the list at the end of this post for several of these recent reports. Perhaps what I found even more valuable were the larger lessons that emerge from the Pandora story, such as how to survive by being agile and adaptive, and how perseverance, on the part of both founder and CEO Tim Westergren and the many employees who stayed with him through the troubled times, in many cases going years without paychecks, really does win out.
Pandora was one of the first tangible examples Erik Duval and I seized upon and used when showing others what The Snowflake Effect looked like and proof that it was possible. Pandora truly lived up to its mythical name and let the Snowflake Effect genie out of the box for good for Erik and me.
While I continue to watch and experiment with many other music recommender systems such as Last.fm, Slacker, and Spotify, I’ve always found that Pandora does the best job of helping me find just the right songs at just the right time, which is at the heart of the premise and the promise of the mass customization and personalization of The Snowflake Effect. Others prefer to find music based on the tastes of other people, adding and mining data from social networks in their tools, but I’m looking for music that matches my own tastes, moods, and context, and Pandora, with its Music Genome Project database containing intricate details of each song, seems to do this best. * See “The Song Decoders” NYT article for an interesting insight into the people doing this work.
Music is but one of the almost infinite areas where we are seeing the transformation from a culture of mass production to one of mass customization and personalization. Experiencing the difference between finding songs and artists via a service such as Pandora and something like Top 40 radio is one of the most compelling ways of seeing just how powerful and how real this transformation is and will be. We are already seeing examples of similar “decision support” for finding the best books to read, sites to visit, and people to talk to.
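For readers curious about what "matching songs to my tastes rather than mining other people's listening" looks like under the hood, here is a toy sketch of content-based recommendation. The attribute names and numbers are invented for illustration; the real Music Genome Project uses hundreds of expert-coded traits per song, not the three shown here.

```python
import math

# Toy "genome" vectors: each song described by a few hand-coded musical attributes.
songs = {
    "Song A": {"acoustic": 0.9, "tempo": 0.3, "vocals": 0.7},
    "Song B": {"acoustic": 0.2, "tempo": 0.9, "vocals": 0.4},
    "Song C": {"acoustic": 0.8, "tempo": 0.4, "vocals": 0.6},
}

def cosine(a: dict, b: dict) -> float:
    """Similarity between two attribute vectors (1.0 = identical direction)."""
    dot = sum(a[k] * b[k] for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

# A listener profile built from songs they have "thumbed up".
taste = {"acoustic": 0.85, "tempo": 0.35, "vocals": 0.65}

# Rank songs by how well their content matches this one listener's taste,
# with no reference to what anyone else listened to.
ranked = sorted(songs, key=lambda s: cosine(songs[s], taste), reverse=True)
print(ranked)
```

The contrast with social or collaborative approaches is exactly the one described above: the recommendation comes from the song's own attributes matched against your profile, not from the crowd.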
If music recommender systems are new to you or it has been a while since you have tried them, I’d encourage you to check them out and see if they don’t help you find just the right songs, just for you, and in doing so give you a glimpse of the Snowflake Effect future before us.
Seems like today is filled with examples of science fiction becoming reality and adding to my WITII? list of “What if the Impossible Isn’t?” Below is the latest example of some significant advances in gesture-based computing, Minority Report style. Be sure to follow the link to the full article for some great videos to show you how this works.
For me, gesture-based computing is but one of the many exciting ways we are finally closing the gap between ourselves and the “machines” we interact with. These far more natural interfaces also include things like touch-based, voice-controlled, and direct mind/brain-controlled interfaces.
In 2008, I attended a meeting in Madrid, Spain that featured the coolest demonstration I had ever seen. The problem was that I wasn’t allowed to talk about what I had seen because the company was still in stealth-mode. More importantly, several governments, including the U.S. government were still exploring various parts of the technology for next-generation computing systems, so parts of this were very confidential. By the end of that year, Oblong Industries had revealed itself, but still little was said about its project. Finally, people are starting to talk about it.
While we may not have been at this year’s TED conference, apparently, Oblong was. And apparently, it wowed the crowd. And it should have. If you’ve seen the movie Minority Report, you’ve seen the system they’re building.
No, really. The co-founder of Oblong, John Underkoffler, is the man who came up with the gesture-based interface used in the Steven Spielberg movie. And now he’s building it in real life.
In a former lifetime, when I was championing metadata and standards around the world, one of the most common questions, and reasons for doubting this was the future, was that it would be impossible to ever have enough metadata. Where was it all going to come from, and who was going to create it all? With most people still thinking that metadata was the same as library cards in the catalogue and that all metadata had to be entered by us humans, it was a reasonable concern. I remember my dear friend Erik Duval and I, and others, suggesting that this was not impossible at all and that much of the metadata would come from automated metadata generation. At the time we could only point to a limited number of very early examples. Clocks and GPS sensors in cameras, for example, could automatically add time and location to the information of each photo, which also includes all the details of the camera used, exposure settings, and more. But this was barely the beginning, and now it seems like I’m finding more examples every day, ones which are creating metadata at truly breathtaking rates and scale. And not just simple metadata, but very rich and deeply detailed sets of metadata are being created.
A recent example (below) prompting this post was how SimpleGeo (a relatively small startup that hasn’t even launched!) is already indexing over one million locations PER HOUR. Listen to SimpleGeo founder Joe Stump describe some of the metadata this is generating, and just one example of how this could be used, in this quote:
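The camera example is easy to try yourself. The minimal sketch below reads the metadata a camera or phone writes automatically into every photo; it assumes the Pillow library and a file named "photo.jpg" purely for illustration (GPS coordinates live in a separate EXIF sub-section not shown here).

```python
from PIL import Image, ExifTags  # pip install Pillow

# Minimal sketch: read the metadata a camera adds automatically to every photo,
# e.g. timestamp, camera make/model, orientation, exposure-related tags.
def auto_metadata(path: str) -> dict:
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

meta = auto_metadata("photo.jpg")  # any JPEG straight off a camera or phone
print(meta.get("DateTime"), meta.get("Model"))
```

No human typed any of that in: it is exactly the kind of automatically generated metadata the paragraph above is pointing to.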
"Location-based devices only provide a latitude and a longitude, sometimes an altitude," he said. "What they don't provide is a ZIP Code, city, state, county, weather data, messages and photos posted near the site. They don't provide business listings, Wikipedia entries, census data (for demographics), articles written or posted near the location," all of which SimpleGeo does. For example, a location-based game set in San Francisco could accurately display its players gleaming in the California sun, or obscured by Golden Gate fog, based on the real-time weather data from around town.”
Even this is but the beginning of the gathering of metadata snowflakes as they pick up speed rolling down the hill, turning into an exponentially growing snowball and avalanche of metadata. Watch especially for the kinds of previously unimaginable benefits and capabilities all this metadata will enable.
Note: I am much better at starting things than finishing them and have this bad habit of starting postings that never get posted. This one has been sitting in my drafts folder for several months now and is long overdue to finally get to you, so here it is. The iPad announcement has since come and gone and will help fuel some of what I'm looking for, but as you'll read, I'm still optimistic and holding out for a future covered in "digital goo". - Wayne
Seems I’m on a bit of a print sprint of late (see previous List with a Twist posting for example), though I’m surely spending more time in front of a screen than ever. Today’s update is about the future of magazines and an update on my long standing quest and wait for the arrival of true digital paper or digital “goo”.
I LOVE Magazines!
I read a lot of magazines—more than one a day—on a very wide range of topics. And I’m one of those silly people who reads every page of the magazine, including all the ads. So as digitally inclined and geeky as I may be, let’s get this out of the way up front: I LOVE magazines.
Things I love (and want to retain) about magazines:
Focused topical areas
Relevant, informative, and clever (sometimes) advertisements
Great glossy color photos, graphics
Serendipitous discovery of cool new ideas, products, etc.
Reading anywhere I’m sitting, having it in my hand/lap rather than on my desk
Just the right size/heft for handheld/lap reading
Easy on the eyes for extended reading
Cleared for take off and landing times! (reading anywhere such as planes)
No cords or batteries
However, as with any love or relationship, here are a few things I don’t like so much and would like to see improved:
Physical delivery format limits me to mail subscriptions, magazine stands, etc.
Weight. On my last flight 28 lbs/12.7kg of my bags were magazines!
Wastefulness. Apparently selling 15% of the magazines placed on newsstands is considered high!
Static images only
Read only
Requires good external lighting
No easy search/find capabilities, especially across issues
No connections to related items of interest
No cut, copy, paste, share, keep (I used to have boxes full of torn-out magazine pages)
No dictionary or lookup
No easy way to archive them
Can I do some of this reading and resolve some of these problems on my laptop, tabletPC, smartphone, or eReader? Yes, but not well, and nowhere near the same reading experience as real magazines.
I Love Books Too
I don’t equate magazines at all with books, though books too are another love. Many of the items on my like/don’t-like list apply to books as well, but my Kindle has resolved most of my don’t-like list items and it has increased my book reading quantity and quality. I’ve still got some key items on my wish list for the next eBook version, such as a touch screen interface, handwritten notes, etc., but the ease of getting new books no matter where I am, access to so many (including almost all of the classics for free), linked note-taking and having my entire library with me all the time (including manuals for all my boat equipment) has been one of the best experiences in the past few years.
And for me, Kindle passes the most important test of all for book reading—I can lose myself in the story. By this I mean I am unaware that I’m reading, paper or otherwise and I'm in the story or in my thoughts as I read! Much of this is due to the very paper-like screen technology (E Ink in the case of Kindle and many other eReaders) as well as its baby bear (not too big, not too small) size and weight.
eMagazines?
So far, eReaders and tablets don’t work too well with magazines…at least not for me. I’ve tried quite a few and continue to try more of them—both software and hardware based. But with magazines, I’m looking for a different experience: color is critical, so is size of the images (larger), and yet not larger in weight and size in my hand/lap. However, I’m seeing some help on the horizon for my magazine reading and in the short term (next year), I think some of the new digital formats could provide some near-term solutions.
Check out the video below for something we’ll almost assuredly see in the next few months, and which has some promise. If you’re like me, you’ll need to look beyond the content of this example (Sports Illustrated), but I do like much of what I see in the demo (thanks to Dan Pink for the tip). Peter Kafka also has a posting on this: “Game On: Time Inc. Shows Off a Tabletized Sports Illustrated”
I am especially intrigued by the interface and the nice mix of old and new, retaining all the familiarity of the old print- and paper-based magazine blended with new capabilities such as richer content with video and audio, search, rearrangement, sharing, and linking. This video is well worth 3 minutes of your time. Have a look and see what you come away with.
Note that pretty much everything you see in this demo, and what I’m most intrigued by, are the changes in content and interface features. However, just as important will be the issues of the hardware and the physical attributes of what comes next.
Here is another example of a different set of perspectives on what the future of eMagazines might look like. It’s worth watching to see what these researchers and developers assessed to be some of the essential elements of the magazine reading experience which we want to retain and what we want to avoid.
I’ve been a tabletPC champion and user for over 10 years (bought one of the first Apple Newtons too!) and still believe that tabletPC-based features, especially screen-based features such as multi-touch, “flippable” screens, handwriting recognition and input, and so on, will become standard on all laptops, and all screens for that matter. At this point in time, tabletPCs are the best and almost the only choice for any digital magazine reading I do, but the benefits I get from magazines are not there. Much of the problem is due to the physical limitations of the devices: too thick, heavy, and clunky. And there is still no help with subscriptions, integrated note taking, integration of content across devices, common formats, or ease of access, and very limited content beyond what is in the print versions.
It is no coincidence then that there is so much rumor and hype surrounding things like the upcoming “iPad” and rumblings from other hardware sources. It may also bode well that some of those in the publishing world are taking a more future-oriented view of these developments, such as is demonstrated by Time Inc. in the video above and by this more recent announcement of Wired's upcoming version for the iPad. Adobe is also rumored to be developing a publishing tool and magazine reader for tablet devices. And here is a recent list of "5 Things That Will Make E-Readers Better in 2010". At the very least, 2010 promises to be a significant year for more options that may help deliver some of my wish list items for magazine-type reading.
Digital Goo: The Only Book or Magazine I Ever Need?
As excited and optimistic as I am that we will see all of the above, and more over the coming year, I’m holding out for a completely different development—digital goo. This is what I’ve been asking for and evangelizing about for more than 20 years. Digital goo will be the advent of “real” digital paper, virtual paper or whatever we come to call it, and more importantly, will be a major shift in the way we think of, access, create, and consume content.
Here is an excerpt from an old paper that will give you an idea of what I have in mind for “digital goo”. I wrote this back in the 1990s, so you’ll have to pardon the black-and-white and television orientation, but I think you’ll get the idea. It is as simple as it is profound, and yes, I’m still waiting.
… imagine a substance at the molecular level where each molecule is a tiny sphere where one half of its surface is black and the other half white. Each black and white sphere can easily be controlled by electrical input so that either the white or the black side is facing up. Now, mix this substance into paint or wood pulp or plastic and you suddenly have the ability to make ANY surface digital and capable of displaying ANY image you’d like with almost infinite resolution.
You can see the possibilities. Imagine binding a few pages of digital paper together to create truly digital books! Oh, and note that the images on this digital paper can move, so suddenly you can be watching “TV (the content) in a book.” Just throw in some other colors with those black-and-white molecular spheres and we’ve got color screens literally EVERYWHERE! An equally frightening and exciting vision for most of us, I suspect. Based on what I’ve been privileged to see in research labs for the past few years, the introduction of this technology into the marketplace can’t be far off.
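If the two-tone-sphere idea is hard to picture, here is a deliberately crude toy model, in no way how real e-paper is engineered, that treats a surface as a grid of spheres flipped black-side-up or white-side-up by the sign of a local field, and "displays" an image by choosing that field pattern.

```python
# Toy model of the "digital goo" idea: a grid of two-tone spheres, each flipped
# black-side-up or white-side-up by the sign of a local electrical field.
def render(field):
    # field: 2D list of numbers; negative flips a sphere black, positive flips it white.
    return "\n".join("".join("#" if value < 0 else "." for value in row) for row in field)

# A tiny "image" (an X shape) drawn purely by the applied field pattern.
n = 5
field = [[-1 if (col == row or col == n - 1 - row) else 1 for col in range(n)] for row in range(n)]
print(render(field))
```

The surface itself never changes; only the field pattern does, which is exactly why any painted or pulped surface carrying such a substance could, in principle, display any content.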
Therefore, I’m hoping that the next iteration of eBooks will consist of a few pages of real digital paper bound into the only book/magazine/paper I’ll ever need. These pages will be made by adding some “digital goo” to the regular production of paper, or paint, producing sheets of relatively ordinary looking and feeling paper which are now essentially “just” digital surfaces capable of displaying any content at near infinite resolution including video and animations. This same digital paper could also accept inputs such as touch and handwriting.
For me at least, I still want multiple pages, and I think we may see a renaissance of books as a format and form factor in this regard. Maybe it is just a function of my rather severe ADD affliction, which results in my flipping back and forth through multiple pages of a magazine that I’m reading. I bring most of my laptops to their knees because at any one time, I usually have more than 30 tabs running in my browser, and more than 10 other apps running at the same time. But I think we all have versions of this problem, and having multiple pages, both for just larger display areas (think centerfolds and flip outs) as well as speed of access, means that this will become a very common and highly desirable feature.
Research is going on to support much of this, and I’m sure that, like most things, it will be here sooner than we expect and also take longer than we expect to take advantage of.
Peter Kafka, in another related posting on Condé Nast’s Offering for Apple’s Mystery Tablet: Wired Magazine, finished with the following paragraph, which seems just so apropos for the world of exponential change we are living in and a fitting end to this post:
“But all of this assumes that consumers, who’ve shown no inclination to pay for this stuff on the Web, will be willing to pay for it once it appears on devices no one owns yet. We’ll find out soon enough.”
In Douglas Adams’s fun look at the future in The Hitchhiker’s Guide to the Galaxy, there is something he called the Babel fish, literally a small fish which “when inserted into the ear converted sound waves into brain waves, neatly crossing the language divide between any species you should happen to meet whilst traveling in space.”
As change continues at its exponential rate, more and more science fiction such as this becomes reality. While we are still some time away from full real-time audio translations, we continue to get closer and closer, as some of the recent announcements from Google, for example, demonstrate. In the posting below you can see how Google Goggles enables you to take a picture of some text, such as a menu item, and have the image converted into text via OCR and then translated.
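The pipeline being described, photograph of text, then OCR, then machine translation, is simple enough to sketch. The OCR step below uses the pytesseract library (which wraps the Tesseract OCR engine and needs the relevant language data installed); the translation step is a hypothetical placeholder, not Google's actual API, since any translation service could slot in there.

```python
from PIL import Image   # pip install Pillow
import pytesseract      # pip install pytesseract (requires the Tesseract OCR engine)

def translate(text: str, target: str = "en") -> str:
    # Placeholder stand-in for whatever translation service you have access to.
    return f"[{target}] {text}"

# Goggles-style pipeline: photo of text -> OCR -> machine translation.
def translate_photo(path: str, source_lang: str = "spa") -> str:
    text = pytesseract.image_to_string(Image.open(path), lang=source_lang)
    return translate(text, target="en")

print(translate_photo("menu.jpg"))  # e.g. a snapshot of a Spanish menu item
```

Swap the image step for a speech-to-text step and the same pipeline becomes the translated-conversation demo described below.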
UPDATE: Here's a recently released video to show you all this and more:
And at the recent Mobile World Congress in Barcelona (Feb 15, 2010), Google CEO Eric Schmidt demonstrated some upcoming speech-to-text-based translations where you can, for example, say a sentence into your phone and have it come out as translated text on the screen. This in turn could be converted via text-to-speech so the person you are talking to could also hear it in their language. Real-time translated conversations are likely still several years away, but examples like this show that they will likely arrive sooner than we expect.
However, what caught more of my attention was his more general observation about the confluence of powerful devices connected to much more powerful 'cloud' servers to deliver
"things you can do that never occurred to you as being possible."
Quite right, Eric, and thanks for adding to the growing list of examples for my ongoing question, “What if the impossible isn’t?”
In his keynote speech today at the Mobile World Congress in Barcelona, Spain, Google CEO Eric Schmidt showed off what could end up being a crucial tool for anyone trying to figure out a menu in a different language or a street sign in a foreign country. Google Goggles, which creates search queries based on images instead of typed-in keywords, will soon start to be able to translate from foreign languages using Google Translate. It will do this using optical character recognition to first convert the images of letters into words it can understand, and then put those through Google Translate.