Categories
Education

Google Reveals Details of Chromebook Leasing Plans. Students benefit.

Google confirmed at its I/O developer conference Wednesday that it will lease Chrome laptops to schools and businesses.

The program, called Chromebooks for Business & Education, allows businesses and educational institutions to pay Google a monthly fee in exchange for a supported, updated Chromebook. Business users will pay $28 a month for each Chromebook, and educational institutions will pay $20 a month per unit.

Here are the details of the subscription plan as we know them now:

Schools and businesses will order their Chromebook units directly from Google at google.com/chromebook, beginning June 15, 2011.
Google will cover phone, email and hardware replacement for Chromebook Business & Education users.
IT managers can use virtualization platforms like Citrix to offer access to non-web apps, essentially making the Chromebook a thin client on steroids.
Administrators have various management options for configuring and monitoring lots of different units.

Although the potential for education sales is vast — especially at the $240-a-year price point — we think the bigger play for Google is in the business space.

Google has actively courted small, medium, and even large businesses to migrate to Google Apps for email, cloud-based document management/creation, and file access. The company has successfully wooed some high-profile customers — especially with regard to email — but Google still trails enterprise giants like Microsoft, Novell, Oracle and IBM.

If the Chromebook subscription offering works, it could make Google a big player in this space. Of course, that is a big “if.” The promise of thin clients and network computers is nearly as old as the web itself. The Chromebook may be the best implementation of Larry Ellison’s vision to date.

It’s certainly an interesting idea.

Check out this video Google put together explaining the Chromebook for Business & Education:

Would you be willing to pay a monthly fee to “rent” a Chromebook?

Categories
Education

10 Trends for Global Education in 2011

The Good, the Bad, and the Ugly are still part of the incoming tide, but Time will level the playing field once the dust of 20th-century marketing ploys settles and people begin to demand quality over the distracting dissatisfaction of empty, entertainment-filled promises. The masses are tired of being increasingly informed and entertained while being decreasingly enabled and empowered for critical thought and deep learning.

1. Ubiquitous – The Internet has brought us a host of online education choices including everything from unscrupulous diploma mills to a myriad of so-called learning games as well as apps for nearly every subject, platform, and device. This trend will continue. The challenge to identify quality amidst the quantity will grow.

2. Social – Face it, we are social creatures and the only things we learn well in isolation are survival techniques (and even then, we wish we had some others to help us). The Internet’s social layer is solidly in place so expect to see education delivered more broadly on the social grid.

3. Mobile – The mobile generation will expect mobile access to all matters beyond mere communication and game-playing. Devices are personal links to best practices and apps will be developed to meet the ever-increasing demand.

4. Pushed – Traditional supply-side education will continue to lose ground to demand-side education where location-aware apps push just-in-time learning. Instructional design should take advantage of push technology.

5. Personalized – Learning will be personalized more and more, both through eportfolio content creation and through learner-specific, contextually relevant assessment. Department of Education initiatives include plans to aggregate student achievement from cradle to grave, and this data will empower apps to deliver personalized learning experiences.

6. Media rich – Whether we agree with it or not, future literacy will demand our reading of symbols that go beyond mere letters on a page. The digital landscape requires a broader skill set than previous generations learned.

7. Computer free – Web 1.0 was platform and software specific. Web 2.0 has been rather device centric. However, technology has a way of becoming invisible with wide-scale adoption. Expect the same in the education arena. The Internet of things will include more than kitchen appliances. The tools of the education trade will integrate smart technologies to seamlessly deliver interactive experiences previously relegated to traditional face-to-face settings.

8. Relevant – Thanks to GPS chips, technology will afford customized delivery of learning opportunities contextually relevant to the learner.

9. Augmented – Emerging technological innovations are giving learners ways to interact with subject matter that were previously unavailable. Virtual field trips enable learners to transcend time and space barriers. Virtual technologies allow learner avatars to transcend identity barriers.

10. Layered – Just as the social layer has been added to the globally networked world, and just as a game layer is being constructed as I write this, watch for an education layer to be integrated where Like and Comment buttons may be accompanied by a Learn This button (similar to Apture’s Learn More plugin but more developed).

These trends will continue while civilization continues to transition from the industrialized model of nation-state institutions to the globally networked collaborative model. Despite the fact that only half the world is Internet-worked at present, an ignorant populace can only be distracted by superficial entertainment and/or narrow cultural indoctrination for so long. Eventually, the thirst for fulfillment will drive the demand for genuine and deep learning on a global scale. Will greed give way to good?

Categories
Social Media

Brief Summary of Twitter Uses

Twitter is a web-based application that lets users feed a shared data stream with their own input. Because the Twitter data stream is now so large (75 million accounts by the end of 2009), there are plenty of useful ways to navigate that data.

Clearly the Twitter interface can be used to add your own input for any of the following reasons:

To simply participate in the new phenomenon of tweeting
To communicate with your followers or group
To communicate with the World-at-large
To archive your online and/or offline activities
For branding a product or service
Because it’s required by a school assignment

There are many tools for adding your own content, either directly via Twitter.com’s interface or through any of a host of apps that enable multiple posting outputs, such as Tweetdeck.com or Posterous.com, which allow a single post to enter the Twitter stream while also updating your Facebook or LinkedIn status, your blog, your photo-sharing app, and so on. SocialOomph.com, Tweetlater.com, and others like them offer the ability to time-delay tweet postings as well as automate replies to new followers. Apps like Grouptweet.com and Present.ly enable the formation of private groups for both input and output.

These multiple-input apps are attempting to meet the needs of active Internet users. Is it a sustainable model? Time will tell, but for now it’s necessary as the crowd filters out the superfluous and settles on its preferred mechanisms for communication.

However, an important alternative to merely adding to the data stream (input) is the myriad of ways people are using Twitter to monitor the output. There are many ways to listen to the data Twitter is streaming. Reasons to listen include:

Simply to watch the stream as it flows by
To stay informed
To monitor trends
To mine data as its own resource

New apps developed every week enable more ways to listen to the babbling data stream. Twendz.com helps gauge crowd sentiment based on user-chosen keywords. Useful for businesses that want to know what people are saying about their product, their industry, or even their competition, apps like Twendz can become powerful tools in the hands of marketers who realize the market-hive is always abuzz with the hum of communication.

Trendsmap.com allows location-based, real-time monitoring of what’s being tweeted in specific places. Take a look at the homepage and see what you can determine just from the U.S. map in general. It could be a useful tool in the classroom for teaching critical-thinking skills such as higher-order extrapolation. Twazzup.com uniquely allows the viewing of real-time tweets according to specific keywords and displays them on a clean page that includes photos, news, and the most popular links related to that keyword. Here’s an example using Haiti.

Both TweetGrid and Monitter allow users to create dashboards of keyword-specific Twitter feeds that update in real time. An ever-increasing host of apps is seeking new ways to mash up the Twitter data stream and present the output in some unique fashion. With geolocation APIs added to the mix, forthcoming apps should prove quite interesting, to say the least.
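The keyword-dashboard pattern behind tools like Monitter can be sketched in a few lines of Python. This is only an illustration: the tweets below are invented sample data, and a real dashboard would consume the live Twitter API rather than a hard-coded list.

```python
# Minimal sketch of a keyword dashboard over a tweet stream.
# The tweets are invented sample data for illustration only.

def matches(tweet, keyword):
    """Case-insensitive keyword match against a tweet's text."""
    return keyword.lower() in tweet["text"].lower()

def build_dashboard(tweets, keywords):
    """Group incoming tweets into one column per keyword,
    the way keyword-dashboard apps do."""
    return {kw: [t["text"] for t in tweets if matches(t, kw)]
            for kw in keywords}

tweets = [
    {"user": "alice", "text": "Learning Python in the classroom today"},
    {"user": "bob",   "text": "Haiti relief efforts are trending"},
    {"user": "carol", "text": "New classroom tools for teachers"},
]

dashboard = build_dashboard(tweets, ["classroom", "Haiti"])
print(dashboard["classroom"])  # two matching tweets
print(dashboard["Haiti"])      # one matching tweet
```

A real-time version would simply re-run the grouping as new tweets arrive, which is essentially what the dashboard apps described above do behind the scenes.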

Augmenting our daily routine, whether personal, social, academic, or business, is the new reality we all face. Fresh views of what’s going on around us in real time are sure to open our eyes to mundane experiences we’ve been taking for granted. What cool new Twitter apps have you been using to augment your real-time learning?

Categories
Education

Tomorrow’s Web is trending: Social, Media-rich, Ubiquitous, and Computer-free

Did you know that it’s been nearly twenty years since the first website was placed online? Have you ever thought about how the Internet and the web have evolved in time?

Ponder it: the Internet, a complex series of interconnected networks, protocols, servers, cables, and computers, has evolved from its early days as a U.S. Department of Defense research project into the foundation for the World Wide Web, which we use today to interact with one another via browsers, email, Twitter, Skype, and millions of other online tools.

As we approach the imminent launch of the Apple Tablet and analyze new trends coming out of this year’s Consumer Electronics Show, now is a good time to reflect on what the web will look like in the next decade and beyond.

I have four big predictions to share for what the web will look like in the near future. This is what I expect in the evolution of our online lives:
1. The Web Will Be Accessible Anywhere

Our society couldn’t operate today without Wi-Fi, but it didn’t become prevalent until the early to mid-2000s. Before that, we used Ethernet cables and before that, our primary method of connecting to the web was via phone lines. Every few years, our method of accessing the web changes to be faster and more accessible.

Two things make me believe that the web will be accessible from anywhere and at any time: the rise of wireless 3G and 4G networks and the likelihood of nationwide Wi-Fi blanketing the U.S. and beyond.

Let’s first talk about 3G: since its introduction in the early 2000s, it has quickly spread to major cities worldwide. Accessing the web is now as simple as pulling out your smartphone, and it’s getting faster with the introduction of 4G networks and 4G phones. The Apple Tablet is even rumored to have a data plan on Verizon and AT&T’s 3G networks. More and more laptops come with built-in 3G access as well.

Nationwide Wi-Fi is the more exciting prospect, though. In 2008, the FCC held an auction for the 700 MHz wireless spectrum. A lot of attention was focused on that auction when Google joined as a multi-billion-dollar bidder. Some speculated that Google wanted to turn the spectrum into a nationwide Wi-Fi network. While Verizon eventually won, a nationwide Wi-Fi network is still very possible and, in fact, seems logical given the direction of web technology today.

The point is that more devices will have access to these networks and that these networks will be more prevalent as time goes on. Ten to twenty years down the road, people will wonder how we managed with laptops disconnected from a Wi-Fi or 4G signal.
2. Web Access Will Not Focus Around the Computer

In a column on CNN earlier this month, Mashable’s Adam Ostrow explored one of the biggest trends at CES: the embedding of the web outside of the computer. At present, we focus our Internet use in the U.S. on our laptops. In Japan, though, many more people access the web primarily through their phones, a trend that is just beginning to sweep the States.

This is just the beginning. New Internet-enabled TVs will allow us to browse from the living room and soon our cars will become Wi-Fi hotspots.

The Apple Tablet looks to be the next stage of this evolution. Rumor has it that not only is the device going to have 3G access, but Apple envisions it as a shared piece of hardware among the family. Instead of having to jump onto the computer to check your email, you can just have your girlfriend or boyfriend pass you the tablet to see what’s going on.

In ten years, computers will account for only a small percentage of how we use the web. We’re going to be accessing it from nearly every device and appliance we own.
3. The Web Will Be Media-Centric

Text-based interactions are going to diminish until they’re just a minor component of our web experience. Yes, we will always write, blog, and tweet, but as more and more devices adopt touchscreen interfaces and alternatives to the keyboard and mouse (it’s already happening), our reliance on videos from YouTube and Hulu, social games like FarmVille, and interactive interfaces like the iPhone OS will grow rapidly.

Here are some of my thoughts on how I think this media-centric web will come to be:

– Voice-to-text technology will be a major part of the media-centric web. The technology isn’t accurate enough for daily use yet, but devices like the Nexus One are pushing its limits. In a decade or two, it’ll be accurate enough to be a viable replacement for our keyboards.
– Interfaces that rely on motion are going to be more important to computing and the media-focused web. Apple popularized phone touchscreen interfaces, and the Tablet has a good shot at popularizing that type of interface on larger screens. While we have a lot more to figure out before touchscreens are popularized on the desktop, I do think their time isn’t far away. I look forward to abandoning the old mouse-and-keyboard interface.

– In the future, you won’t even have to touch the screen. HP’s “Wall of Touch” actually doesn’t require users to touch the screen in order to interact with it, and Microsoft’s Project Natal looks to turn gaming into a controller-less experience. This is the future.

– These interfaces simply make it easier to bring up images, videos, music, and other multimedia. It’s not about keyboard commands, but about apps, drag-and-drop, and having an immersive experience.

4. Social Media Will Be Its Largest Component

Stats published by Nielsen show that social media usage increased by 82% in the last year, an astronomical rise. Facebook, Twitter, YouTube, blogs, and social interaction are becoming the focus of our online interactions, even more than search.

We’re social creatures, so it was only a matter of time until we figured out how to make the web an efficient medium for communication, sharing, and forging friendships. Now that we’re finally implementing the social layer though, it’s tough to find a scenario where the rise of social media doesn’t continue.

In ten years, when you access the web, most of the time you spend will be to connect with your friends. Almost all of that will be on social networks and through social media. It will be the #1 reason why we ever pull out our phones, tablets, or computers.

Categories
General

The mind-blowing possibilities of quantum computing explained

Getting meaningful results from a quantum computer requires what can only be described as a little magic.

Traditional computers – from your desktop PC to the supercomputers that IBM builds when it’s showing off – all use a system of switches that can be either on or off. We represent this binary state with a 1 or a 0.

Quantum computers are different in that their basic units can be in both of these states at the same time. Such combined states are called ‘superpositions’.

The basic unit of a quantum computer is a quantum bit or ‘qubit’, and its ability to be in two simultaneous states is what makes quantum computers so fast. Sound more like magic than science? Read on, and you’ll discover that despite all the arcane physics, a working practical quantum computer could be just around the corner.
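As a rough illustration of what a superposition means, a qubit can be modelled classically as a pair of complex amplitudes, one for each of the two states. This sketch is plain arithmetic, not a real quantum device: the Hadamard gate shown here is the standard operation that takes a qubit starting firmly in state 0 into an equal superposition of 0 and 1.

```python
import math

# A qubit modelled as a pair of amplitudes (amp0, amp1).
# The probability of measuring 0 or 1 is the squared
# magnitude of the corresponding amplitude.

def hadamard(state):
    """Apply the Hadamard gate, which maps a definite 0
    into an equal superposition of 0 and 1."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1."""
    return tuple(abs(a) ** 2 for a in state)

qubit = (1.0, 0.0)           # starts firmly in state 0
qubit = hadamard(qubit)      # now in superposition
print(probabilities(qubit))  # roughly (0.5, 0.5)
```

Until the qubit is measured, both amplitudes are carried along through every operation, which is the source of the “both states at once” behaviour described above.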

Interest in quantum theory and its application to computation is partly a result of work carried out by the mathematician Peter Shor. He developed an algorithm that could factor large numbers using a quantum computer.

The possible speed of this algorithm shows the potential of the technology. Shor’s algorithm is so powerful that it holds the promise of cracking the supposedly watertight encryption you and I use when doing internet banking, something that no conventional computer has come close to.

Indeed, the potential processing power of quantum computers truly boggles the mind. Because a quantum computer essentially operates as a massive parallel processing machine, it can work on millions of calculations simultaneously (whereas a traditional computer works on one calculation at a time, in sequence).

ALREADY SLOW: The IBM Blue Gene supercomputer is as powerful as a ZX81 next to a quantum computer

A 30-qubit quantum computer would have around the same processing power as a conventional computer running at 10 teraflops. By way of contrast, current desktop computers operate at mere gigaflop speeds.
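The reason a modest 30 qubits carries such weight is the exponential size of a quantum state: n qubits are described by 2^n amplitudes, all manipulated in a single step. A quick back-of-the-envelope calculation, purely for illustration:

```python
# n qubits require 2**n amplitudes to describe, so each
# extra qubit doubles the state space the machine works on.
for n in (1, 2, 10, 30):
    print(f"{n:>2} qubits -> {2**n:>13,} amplitudes")
# 30 qubits already correspond to over a billion amplitudes.
```

This doubling per qubit is also why simulating even mid-sized quantum computers on classical hardware becomes infeasible so quickly.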

Nuts, bolts and electrons

This sounds great, so why aren’t we all using them? The answer is that, at present, a working quantum computer capable of solving real-world problems is still firmly on the drawing board. To see why producing a proper machine is so hard, we need to go back to basics.

Electrons, photons and atoms form the memory and processor of the quantum computer. These make up the magical qubits. Understanding, building and manipulating these qubits is the really tricky part of getting a quantum computer to function. It could even be said that the quantum computer exists in a parallel universe to our own.

When the computer works on a problem that you’ve given it, the calculations are performed within this parallel universe until an answer is presented. But it doesn’t stop there. You can’t just see the answer when the calculation is complete. In fact, you can’t see the answer at all until you actually go looking for it. And when you do look for it, you could disturb the state the quantum computer is in and end up getting a corrupted result.

All the parallel calculations that the quantum computer is doing don’t actually collapse down to a final answer until you consciously try to observe it. In some ways, then, it’s not the answer itself that’s important, but how you get hold of it. It’s this observational component of the quantum computer that forms the biggest obstacle to actually building one.

Physicists refer to this problem as ‘entanglement’, what Einstein called “spooky action at a distance”. Entanglement is, in essence, a link between qubits in which the behaviour of one qubit depends on the state of another.

WHAT? NO MOUSE?: Dr Isaac Chuang loads a vial containing the seven-qubit quantum computer molecules into the nuclear magnetic resonance apparatus

What causes headaches is that as soon as you look at one qubit, you change its state and the entire system collapses back into being a standard digital computer. This is known as ‘decoherence’, and is what makes the observations or results you’re looking at inaccurate or misleading.

For these complex reasons and many others, actually building a working quantum computer that can solve real-world problems is far from easy.

Despite the difficulties, however, there has been progress in several areas of quantum computing. As the state of a qubit is, in effect, outside of the physical universe, the quantum computer can move away from classical computer designs using transistors connected by microscopic wires.

Moore’s Law has so far delivered massive growth in computer processing power as transistors and the connections between them become smaller with each passing year. However, things are starting to change, and solid-state quantum computers look set to bridge the gap between traditional transistor-based computers and their quantum cousins.

In a quantum computer, the computations are carried out by an exchange of information between individual qubits. This exchange of information is achieved by teleportation. This doesn’t mean that a qubit, such as an atom or photon, is ‘dematerialised’ à la Star Trek, but that the properties of one qubit are transferred to another. This has been achieved at the University of Vienna and the Austrian Academy of Science.

An optical fibre was used to connect lab buildings situated on opposite sides of the River Danube. The lab was able to teleport qubits of information using a technique based on photon polarisation.

They succeeded in exploiting the entanglement phenomenon, which meant that two particles were tied together even though they were physically separate – the spooky action at a distance that Einstein talked about. The particles existed in a parallel universe where they were able to change their state.

As a result, they could exchange information, which is just what they would need to do in order to make meaningful calculations. So how far away are we from building working quantum computers?

Actually, we have already constructed some of these near-mythical machines, even though they’ve employed relatively few working qubits. The earliest example was built in 1998 by scientists working at MIT and the University of Waterloo. It only had three qubits, but it showed the world that quantum computers were not just a fairy tale that physicists told their children.

Two years later, a seven-qubit quantum computer that used nuclear magnetic resonance to manipulate atomic nuclei was built by Los Alamos National Labs. 2000 was also the year that IBM proved it too could build a quantum computer. Dr Isaac Chuang led the team that built a five-qubit quantum computer which enabled five fluorine atoms to interact together.

The following year saw IBM once again demonstrate a working quantum computer. This time the firm was able to use Shor’s algorithm. IBM used a seven-qubit quantum computer to find the factors of the number 15.
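For a number as small as 15, the step that IBM’s machine performed quantum-mechanically — finding the period of a^x mod 15 — can be brute-forced classically, which makes the algorithm easy to see end-to-end. The sketch below is the classical post-processing of Shor’s algorithm with the period found by trial, not the quantum part; it recovers the factors 3 and 5:

```python
import math

def find_period(a, n):
    """Smallest r > 0 with a**r % n == 1, found by brute force.
    Shor's algorithm finds this period on quantum hardware;
    for n = 15 we can simply try exponents in turn."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_classical_postprocess(a, n):
    """Given the period r of a mod n, derive non-trivial factors
    of n via gcd, as in the classical half of Shor's algorithm."""
    r = find_period(a, n)
    if r % 2 != 0:
        return None  # odd period: restart with a different a
    x = pow(a, r // 2, n)  # a^(r/2) mod n
    return math.gcd(x - 1, n), math.gcd(x + 1, n)

print(shor_classical_postprocess(7, 15))  # (3, 5)
```

With a = 7, the period is 4 (since 7^4 mod 15 = 1), giving gcd(48, 15) = 3 and gcd(50, 15) = 5. The quantum speed-up lies entirely in finding that period for numbers far too large to brute-force.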

A more complex quantum computer was also built in 2006 by MIT and Waterloo, and in 2007 a company called D-Wave burst onto the market with what it claimed was the world’s first 16-qubit quantum machine.

RIDE D-WAVE: D-Wave Systems’ 16-qubit quantum computer is the subject of much debate

D-Wave has yet to prove that its system is a true quantum computer, but this year also saw a team at Yale build the first solid-state quantum processor. The two-qubit superconducting chip was able to perform some basic calculations.

The significance of this development by Yale’s scientists is that it shows that a quantum computer can be built using electronics not that dissimilar to the components found in your desktop PC.

Yale’s system used artificial atoms that could be placed in the superpositional state quantum computers require. Until this development, scientists could not get a qubit to last longer than a nanosecond. In comparison, the Yale qubit lasted microseconds. This is long enough to perform meaningful calculations.

Scientists working at the Universities of Manchester and Edinburgh have combined tiny magnets with molecular machines to create what could end up being the building blocks for future quantum computers. Professor David Leigh of the University of Edinburgh’s School of Chemistry said:

“This development brings super-fast, non-silicon-based computing a step closer. The magnetic molecules involved have potential to be used as qubits, and combining them with molecular machines enables them to move, which could be useful for building quantum computers. The major challenges we face now are to bring many of these qubits together to build a device that could perform calculations, and to discover how to communicate between them.”

Looking forward to that goal, one of the most promising developments in the field is quantum dots. These are nano-constructions made of semiconductor material. As such, we can use many of the techniques that we now use to build traditional computers to harness quantum dot technology.

It may be possible to manufacture quantum dots in much the same way as we currently manufacture microprocessors. If the technology were successful, we could build quantum computers with as many qubits as we need. As things stand it’s still too early to make complete logic gates from quantum dots, but the technology looks very promising indeed.

The supercomputers we have today look like abacuses when compared to the processing power that quantum computers promise. With so many different avenues being explored by scientists, the final working structure of the quantum computer has yet to be realised.

What recent work does show is that it’s a realistic ambition to build a commercial quantum computer over the next few years. When that power arrives, we’ll see a truly quantum shift in how we all manipulate information.