The push for paperless health records is more than a technological advance. It began with the Health Information Technology for Economic and Clinical Health (HITECH) Act, signed by President Obama in 2009 and enacted as part of the American Recovery and Reinvestment Act of 2009 (ARRA), or Stimulus Act, which allocated $36.5 billion for the healthcare industry to convert medical records from paper to electronic form. This includes funds to engage EHR/EMR vendors and specialists and to provide incentives for Medicare and Medicaid providers moving toward EMR adoption. As EHR implementation is enforced, further incentives will become available to EMR providers. The deadline for converting medical records was set at 2015.
EHR: The Pros and Cons
The advantages of EHRs are numerous. A key argument is that EHRs can save lives by reducing human error. For example, if physicians and nurses have vital electronic medical information at their fingertips, fewer delays are expected in new treatment and critical care. EHRs also benefit patients by eliminating the tedious process of recounting their medical histories to new caregivers.
Confidentiality laws, however, need further scrutiny, and EHR security may need to be fine-tuned, particularly with respect to behavioral health and substance abuse records. Another issue is EHR implementation in smaller medical practices with little or no IT support.
Because they tend to herald cars that will generally be in demand, announcements by General Motors’ Vauxhall/Opel division of new vehicles for the UK market usually attract plenty of attention.
Beyond this, it should also be remembered that, with its large plant at Ellesmere Port, Cheshire, any news that a new model is likely to be built there is certain to be a significant fillip to the 1,650 or more workers there.
Indeed, Vauxhall itself says in its own publicity that the Wirral site “is a vital component of the Vauxhall/Opel business”.
So the news that a new electric vehicle will be produced at the plant, but that it won’t be the carmaker’s current Ampera-e model, seems to send a mixed signal about the company’s plans for its electric vehicle range.
What seems to have been decided is that the Ampera-e will be produced primarily for the American market, where the company appears to see great potential in aggressively pitching the car as a rival to the Toyota Prius.
Speaking to Autocar magazine at the Paris Motor Show in 2016, Opel’s head of marketing, Tina Muller, said: “When the Chevrolet Bolt [which the Ampera-e is based on] was developed years back, I think the whole electro-mobility market at the time was very small and niche.”
The Bad News…
In other words, General Motors did not consider it worthwhile to develop a right-hand-drive Ampera, which, considering some of the reviews the car received during the three years (2012-14) it was on sale in the UK, some may think an odd decision.
Once its reviewer had become accustomed to the car being almost silent at virtually any speed, Honest John was very impressed with the Ampera at the time, remarking that the car was “composed at speed, even over fairly rough surfaces, and it stays quiet with just the intrusion of road noise breaking the silence.” Nevertheless, he added: “With no engine noise, it’s quite a strange experience.”
Top Gear, too, thought the car had plenty to recommend it, notably that “with 273lb/ft on tap, the Ampera is actually pretty brisk, especially from the off, and there’s no need to wind the revs up to reach peak pulling power. It’s good for 0–62mph in nine seconds flat, and there’s even a decent amount of shove when you’re travelling at motorway speeds.”
That said, every review we’ve seen cited the price, up to £35,440, as a major obstacle to Vauxhall’s efforts to find a viable market for the Ampera. Considering that the latest Prius is available from £23,295, you can see why Vauxhall probably felt it was on a hiding to nothing if it tried to pit the Ampera directly against it.
Plug-in Vauxhall Will Come… But We Don’t Know When
Even so, it is clear from Tina Muller’s own comments that Vauxhall believes there is a definite market in the UK for a car with the Ampera’s plus points. “We understand that electro-mobility will get bigger and bigger, and that’s why we need to take a second step, one that will include right-hand drive,” she told Autocar.
However, when she then added: “I can’t tell you exactly when it will hit the market, but it’s definitely part of our plans,” it may be that the company is well aware that the large price gap between Vauxhall’s electric car, now discontinued in the UK, and its biggest rival is a major issue to be addressed ‘from the bottom up’ as it develops the electric car it will eventually put on sale here.
A Completely New Car
Yes, there is to be a right-hand-drive electric car produced by Vauxhall, but it will be one developed from the ground up specifically for the European and British markets.
Thus, Muller has refused to be drawn on when we’re likely to finally see an equivalent of the Ampera on sale again on these shores.
“Now we understand that electro-mobility will get bigger and bigger, and that’s why we need to take a second step, one that will include right-hand drive,” she said. “I can’t tell you exactly when it will hit the market, but it’s definitely part of our plans.”
So it’s back to square one, as Vauxhall tries to come to terms with the knowledge that, while British buyers are well disposed towards the benefits of plug-in driving, there is a limit to the price they’re prepared to pay for the technology.
Advantages Are Self-Evident
When you consider, though, that the Ampera, when tested, achieved a whopping fuel economy figure of 235mpg, it is clear that some customers, particularly high-mileage company car operators and drivers, would welcome a car with this technology wholeheartedly, if the price was right.
Top Gear, though, was adamant that the car was no mere Prius clone: “Don’t tar it with the same brush as the Toyota Prius and other existing hybrids that offer marginal benefits, as the Ampera moves the game on much further,” it said.
Given such a positive reception, you’d be forgiven for thinking that Vauxhall would be in some hurry to capitalise on the underlying technology at its disposal and featured in the Ampera, even if it was dressed in a completely different shell.
However, with GM having apparently concluded that “the numbers [for the Ampera-e] just didn’t stack up”, at least for now, the company has gone back to the drawing board.
There are two ways of looking at this position: you could say that GM has concluded that the likely small sales potential for an EV at the premium end of the market, where the Ampera sat given its price, did not justify offering it as a standalone model, and so praise it for its honesty; or wonder why, if it is serious about competing in the EV market, it hasn’t simply ploughed more resources into developing one with which it can mount a genuine challenge right now.
The Society of Motor Manufacturers and Traders’ figures showed that sales of EVs were up by 31.8 per cent in the first half of 2016 over the same period a year earlier.
That in itself is a remarkable figure, but it must be placed in the context that, even at this level, of a total of 1.42 million new cars registered between January and June 2016, just 19,252 (roughly 1.36 per cent) were eligible for the plug-in electric car grant.
No Half Measures – But Is The UK Prepared To Wait?
These figures would seem to justify GM’s view that producing a new car from scratch was preferable to offering a right-hand-drive derivative of a car which was, by many measures, seen as uncompetitive against its main rivals.
However, this is clearly a gamble on GM’s part, and there is always a risk that, in holding off on developing and releasing an EV aimed squarely at the UK market, it will invite rival makers to steal a march on the automotive giant.
What the workers in Cheshire and North Wales will clearly be hoping is that, when it arrives, buyers think the car was worth the wait and will be keen to try it for themselves. There is no escaping the fact that, whenever GM makes decisions on whether to produce a new model for a particular market, there are implications for jobs somewhere in the world, such is the sheer size of the company.
As companies compete to digitize the textbook market, there is one approach that shakes the traditional publishing business model: open-source textbooks, whose proponents believe online educational tomes should be free.
Many universities, including MIT and Carnegie Mellon, post course lectures online for free use. A New York Times article last year explained some of the barriers to applying the same approach to textbooks.
For one thing, the textbook authors must agree to have them distributed online without charging royalties — something that may work well in the software world, where engineers often work on projects while keeping a day job, but typically avoided by writers who put their sweat equity into one book at a time. Also, books for K-12 classrooms must meet state standards, and most states don’t have procedures in place for approving open-source textbooks.
But there’s no arguing that having at least a few open-source textbooks (even when purchased through companies like Flat World Knowledge that charge for downloading or printing them) would cut down on the roughly $900 per year that the average student spends on textbooks. Online School has compiled this infographic to explain the cost savings.
Google confirmed at its I/O developer conference Wednesday that it will lease Chrome laptops to schools and businesses.
The program, called Chromebooks for Business & Education, allows businesses and educational institutions to pay Google a monthly fee in exchange for a supported, updated Chromebook. Business users will pay $28 a month for each Chromebook, and educational institutions will pay $20 a month per unit.
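To put the two tiers side by side, the monthly fees quoted above translate into annual per-unit costs as follows (a quick illustrative calculation, not part of Google’s announcement):

```python
# Back-of-the-envelope comparison of the quoted Chromebook lease fees.
MONTHS_PER_YEAR = 12

plans = {"business": 28, "education": 20}  # USD per Chromebook per month

for name, monthly_fee in plans.items():
    annual = monthly_fee * MONTHS_PER_YEAR
    print(f"{name}: ${monthly_fee}/month -> ${annual}/year per unit")
```

That works out to $336 a year for businesses and $240 a year for schools, the figure cited later in this piece.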
Here are the details of the subscription plan as we know them now:
Schools and businesses will order their Chromebook units directly from Google at google.com/chromebook, beginning June 15, 2011.
Google will cover phone, email and hardware replacement for Chromebook Business & Education users.
IT managers can use virtualization platforms like Citrix to offer access to non-web apps, essentially making the Chromebook a thin client on steroids.
Administrators have various management options for configuring and monitoring lots of different units.
Although the potential for education sales is vast — especially at the $240-a-year price point — we think the bigger play for Google is in the business space.
Google has actively courted small, medium and even large businesses to migrate to Google Apps for email, cloud-based document management/creation and file access. The company has successfully wooed some high-profile customers — especially in regards to email — but Google still trails enterprise giants like Microsoft, Novell, Oracle and IBM.
If the Chromebook subscription offering works, it could really make Google a big player in this space. Of course, that is a big “if.” The promise of thin clients and network computers is nearly as old as the web itself. The Chromebook may be the best implementation of Larry Ellison’s vision to date.
It’s certainly an interesting idea, that’s for sure.
Check out this video Google put together explaining the Chromebook for Business & Education:
Would you be willing to pay a monthly fee to “rent” a Chromebook?
The Good, the Bad, and the Ugly are still part of the incoming tide, but time will level the playing field once the dust of 20th-century marketing ploys settles and people begin to demand quality over the distracting dissatisfaction of empty, entertainment-filled promises. The masses are tired of being increasingly informed and entertained while being decreasingly enabled and empowered for critical thought and deep learning.
1. Ubiquitous – The Internet has brought us a host of online education choices, everything from unscrupulous diploma mills to a myriad of so-called learning games, as well as apps for nearly every subject, platform, and device. Online learning platforms such as Udemy, Coursera, and Skillshare offer students the opportunity to study remotely for a fraction of the cost of traditional institutions, often at discounts of as much as 90% off listed course prices. This trend will continue. The challenge of identifying quality amidst the quantity will grow.
2. Social – Face it, we are social creatures and the only things we learn well in isolation are survival techniques (and even then, we wish we had some others to help us). The Internet’s social layer is solidly in place so expect to see education delivered more broadly on the social grid.
3. Mobile – The mobile generation will expect mobile access to all matters beyond mere communication and game-playing. Devices are personal links to best practices and apps will be developed to meet the ever-increasing demand.
4. Pushed – Traditional supply-side education will continue to lose ground to demand-side education where location-aware apps push just-in-time learning. Instructional design should take advantage of push technology.
5. Personalized – Learning will be personalized more and more in the way of both eportfolio content creation as well as learner-specific and contextually relevant assessment. Department of Education initiatives include plans to aggregate student achievement from cradle to grave and this data will empower apps to deliver personalized learning experiences.
6. Media-rich – Whether we agree with it or not, future literacy will demand our reading of symbols that go beyond mere letters on a page. The digital landscape requires a broader skillset than previous generations learned.
7. Computer-free – Web 1.0 was platform- and software-specific. Web 2.0 has been rather device-centric. However, technology has a way of becoming invisible with wide-scale adoption. Expect the same in the education arena. The Internet of Things will include more than kitchen appliances. The tools of the education trade will integrate smart technologies to seamlessly deliver interactive experiences previously relegated to traditional face-to-face settings.
8. Relevant – Thanks to GPS chips, technology will afford customized delivery of learning opportunities contextually relevant to the learner.
9. Augmented – Emerging technological innovations are adding ways for learners to interact with the subject matter in ways previously unavailable. Virtual field trips enable learners to transcend time and space barriers. Virtual technologies allow learner avatars to transcend identity barriers.
10. Layered – Just as the social layer has been added to the globally networked world, and just as a game layer is being constructed as I write this, watch for an education layer to be integrated where Like and Comment buttons may be accompanied by a Learn This button (Similar to Apture’s Learn More plugin but more developed).
These trends will continue while civilization continues to transition from the industrialized model of nation-state institutions to the globally networked collaborative model. Despite the fact that only half the world is Internet-worked at present, an ignorant populace can only be distracted by superficial entertainment and/or narrow cultural indoctrination for so long. Eventually, the thirst for fulfillment will drive the demand for genuine and deep learning on a global scale. Will greed give way to good?
“Playful math?” Yes! For young children, play and mathematics go together seamlessly and naturally. With simple materials and a bit of planning, early childhood educators can use games to help spark important mathematical ideas, and learn a great deal about children’s thinking along the way.
Why Math Games?
Math games give children a structure and process for engaging in problem solving in order to reach a particular goal or objective. Reaching that goal may be challenging, but the challenge is also what makes game play fun. In a game, children can play alone or with a group, they can make their own decisions about the moves they will make, and they can play over and over, trying out different strategies.
In addition to all the foundational mathematics learning going on while they play games, children are building their confidence as problem solvers and practicing important social-emotional skills. Games in the preschool classroom also give teachers the opportunity to gain insights into children’s developing mathematical thinking.
Persistence and problem solving. Games are an ideal vehicle for children to practice persistence and problem solving as they try out new strategies and encounter challenges. They can see what works and why, and try again without the pressure of doing it “right.” Teachers can support the development of children’s persistence at challenging games and of their confidence as problem solvers.
Social-emotional development. Playing games with classmates fosters social and emotional skills like being patient, taking turns, and solving problems collaboratively. Games with an element of competition also offer children a chance to practice winning and losing graciously and with respect.
Teacher observation. As children engage in game play, preschool teachers have a rich opportunity to observe children’s thinking, reasoning, and math skills at work. For example, as a child moves a game piece along a number path in a board game, the teacher can see whether the child can recite the number sequence accurately and maintain one-to-one correspondence.
Repetition and practice. Finally, games give children repeated practice, as they enjoy playing the same games over and over.
What’s the Math?
Using games as a foundation for learning, and with input from more than one hundred Head Start teachers, EDC’s Games for Young Mathematicians has developed a set of instructional materials for preschool teachers that are fun, engaging, and easy to use. Games and accompanying resources on the Young Mathematicians website focus broadly on counting, operations, algebraic thinking, and geometry.
These games are designed to be adaptable, with multiple entry points so they are appropriately challenging for all children. For example, children who are at the beginning of their mathematics learning can engage with small numbers and simpler versions of the games; children who are further along are challenged with additional choices (e.g., you can add or subtract the quantities on a die).
Below is a sample of games focused on number sense. Number and operation skills are foundational for later mathematics learning and represent a large portion of early childhood math learning standards. The following games are examples that highlight these key number and operation skills:
recognizing and knowing the number word list,
subitizing (instantly knowing how many are in a small set),
cardinality (knowing that the last number counted is the number of objects in the set),
one-to-one counting correspondence (pairing one object with exactly one number word),
written number symbols,
comparing numbers (more, less, same), and
composing and decomposing numbers (addition and subtraction).
Games with Fingers
Simple games with fingers focus on key skills including counting, cardinality, subitizing, and combining and taking apart sets. New research shows that using fingers plays an important role in learning and understanding math (despite what we might’ve been told!). You can also play finger games anytime, since your fingers are always with you!
Try this simple finger game. Hide your hands behind your back, then show your hands holding up a few fingers on each hand. For instance, show three fingers on your right hand and two fingers on your left hand. Children love it when you chant a little rhyme before revealing your fingers: “Fingers, fingers, 1, 2, 3, how many fingers do you see?” Children then call out how many fingers you are holding up.
To make the game a little more challenging, ask children to use two hands and show five in a different way. You can also ask children to show you on their fingers one more or one less than the number of fingers you held up. As children get older and have more practice, you can even ask them how many fingers you are not holding up.
Games with Dot Cards
Dot cards offer a wealth of game options for young children to practice subitizing, counting, and cardinality. Children use cards that have from one to ten black dots arranged in different configurations: linear (straight line), rectangular, dice pattern, circular, and scattered. The dots are arranged in different configurations because the variety helps children develop multiple mental images of quantities. Children need practice with objects in lots of different arrangements. In particular, circular and scattered arrangements are harder to count one by one.
Children can also play with these cards in different ways: covering dots, copying the patterns, matching cards, finding a particular card, and figuring out one more or one less. For a book connection, we recommend the picture book Ten Black Dots by Donald Crews. Children can use it as inspiration to make their own illustrations with dots at the art table. For a home connection, teachers can send home the mini-book Can You Find? for children to read with their families. On each page, children try to find the card with a particular number of dots.
Jumping on the Lily Pads
Children play Jumping on the Lily Pads with a lily pad number-path board, dot cubes, and frog game pieces (or other tokens). This game helps children develop a mental number line and understand that whole numbers are spaced equally along a number line. The more children develop a mental number line, the more prepared they are for the math tasks that await them in kindergarten.
To play the game, children take turns rolling the dot cube and moving their frogs along the game board. The goal is to be the first to reach the pond. While playing this game, children are practicing numeral recognition, using one-to-one correspondence (while moving along the board), and using vocabulary such as before, after, closer, farther. Jumping on the Lily Pads gives students experience with a mathematically important tool, the number line, and with strategic use of that tool.
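The turn-taking mechanics described above can be sketched as a toy simulation. The ten-space board length, player names, and fixed random seed below are illustrative choices for the sketch, not part of the published game materials:

```python
import random

# Toy simulation of the Jumping on the Lily Pads game: players take turns
# rolling a dot cube (1-6) and hopping along a ten-space number path; the
# first frog to reach the pond (space 10) wins.
random.seed(0)  # fixed seed so the demonstration is reproducible

POND = 10
positions = {"frog A": 0, "frog B": 0}

winner = None
while winner is None:
    for player in positions:
        roll = random.randint(1, 6)                      # roll the dot cube
        positions[player] = min(positions[player] + roll, POND)
        if positions[player] == POND:
            winner = player
            break

print(f"{winner} reached the pond first!")
```

Each move is just addition along the number path, which is exactly the mental number line practice the game is designed to build.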
As you bring more math into your classroom, be positive! Your attitude matters, so have some fun introducing and integrating math activities. Use books and games to promote playful math experiences. It’s as easy as playing finger games, card games, and board games to build young children’s understanding of numbers and the number line!
Twitter is a web-based application interface that allows users to feed the data stream with their own generated input. Because the Twitter data stream is so large now (75 million accounts by end of 2009), there are plenty of useful ways to navigate that data.
Clearly the Twitter interface can be used to add your own input for any of the following reasons:
To simply participate in the new phenomenon of tweeting
To communicate with your followers or group
To communicate with the World-at-large
To archive your online and/or offline activities
For branding a product or service
Because it’s required by a school assignment
There are many tools for adding your own content either directly via Twitter.com’s interface, or by using any host of apps that enable multiple posting inputs such as Tweetdeck.com or Posterous.com that allow for a single post to enter the Twitter stream as well as update your Facebook or Linkedin status, your blog, your photo sharing app, etc. SocialOomph.com, Tweetlater.com, others like them, offer the ability to time-delay tweet postings as well as automate replies to new followers, etc. Apps like Grouptweet.com and Present.ly enable the formation of private groups for both input and output benefits.
These multiple input apps are attempting to simplify the needs of active Internet users. Is it a sustainable model? Time will tell but for now, it’s necessary as the crowd filters out the superfluous and drills down to the preferred mechanisms for communication.
However, an important alternative to merely adding to the data stream (input) is the myriad of ways people are using Twitter to monitor the output. There are many ways to listen to the data Twitter is streaming. Reasons to listen include:
Simply to watch the stream as it flows by
To stay informed
To monitor trends
To mine data as its own resource
In fact, new apps being developed every week enable more ways to listen to the babbling data stream. Twendz.com helps focus on crowd sentiment based on user-chosen keywords. Useful for businesses that want to know what people are saying about their product, their industry, or even their competition, apps like Twendz can become powerful tools in the hands of marketers who realize the market-hive is always abuzz with the hum of communication.
Trendsmap.com allows location-based, real-time monitoring of what’s being tweeted in specific places. Take a look at the homepage and see what you can determine just from the U.S. map in general. It could be a useful tool in the classroom to teach critical thinking skills such as higher-order extrapolation. Twazzup.com uniquely allows the viewing of real-time tweets according to specific keywords and displays them in a nice page that includes photos, news, and the most popular links related to that keyword. Here’s an example using Haiti.
Both TweetGrid and Monitter allow users to create dashboards of keyword-specific Twitter feeds that update in real time. An ever-increasing host of apps is seeking new ways to mash up the Twitter data stream and present the output in some unique fashion. With geolocation APIs added to the mix, forthcoming apps should prove to be quite interesting, to say the least.
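All of these keyword-monitoring dashboards rest on the same basic idea: scan each incoming tweet for terms of interest. A minimal sketch of that core step in Python (the tweet texts and keyword here are invented for illustration; a real dashboard would consume the live Twitter stream):

```python
# Core idea behind keyword dashboards like Monitter or TweetGrid:
# keep only the tweets that mention any watched keyword.
def matching_tweets(tweets, keywords):
    """Return tweets containing any of the given keywords (case-insensitive)."""
    lowered = [k.lower() for k in keywords]
    return [t for t in tweets if any(k in t.lower() for k in lowered)]

# Invented sample data standing in for a live stream.
stream = [
    "Donating to Haiti relief efforts today",
    "New laptop just arrived!",
    "Haiti earthquake coverage is everywhere",
]
print(matching_tweets(stream, ["haiti"]))
```

A real service layers refresh polling, sentiment scoring, and geolocation on top of this filter, but the matching step stays this simple.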
Augmenting our daily routine, whether personal, social, academic, or business, is the new reality we all face. Fresh views of what’s going on around us in real time are sure to open our eyes to mundane experiences we’ve been taking for granted. What cool new Twitter apps have you been using to augment your real-time learning?
Did you know that it’s been nearly twenty years since the first website was placed online? Have you ever thought about how the Internet and the web have evolved in time?
Ponder it: the Internet, a complex series of interconnected networks, protocols, servers, cables, and computers, has evolved from its early days as a U.S. Department of Defense research project into the foundation for the World Wide Web, which we use today to interact with one another via browsers, email, Twitter, Skype, and millions of other online tools.
As we approach the imminent launch of the Apple Tablet and analyze new trends coming out of this year’s Consumer Electronics Show (our full coverage), now is a good time to reflect on what the web will look like in the next decade — and beyond.
I have four big predictions to share for what the web will look like in the near future. This is what I expect in the evolution of our online lives:
The Web Will Be Accessible Anywhere
Our society couldn’t operate today without Wi-Fi, but it didn’t become prevalent until the early to mid-2000s. Before that, we used Ethernet cables, and before that, our primary method of connecting to the web was via phone lines. Every few years, our method of accessing the web changes, becoming faster and more accessible.
Two things make me believe that the web will be accessible from anywhere and at any time: the rise of wireless 3G and 4G networks and the likelihood for nationwide Wi-Fi to blanket the U.S. and beyond.
Let’s first talk about 3G: since its introduction in the early 2000s, it has quickly spread to major cities worldwide. Accessing the web is now as simple as pulling out your smartphone, and it’s getting faster with the introduction of 4G networks and 4G phones. The Apple Tablet is even rumored to have a data plan on Verizon and AT&T’s 3G networks. More and more laptops come with built-in 3G access as well.
Nationwide Wi-Fi is the more exciting prospect, though. In 2008, the FCC had an auction for the 700 MHz wireless spectrum. A lot of attention was focused on that auction when Google joined as a multi-billion dollar bidder. Some speculated that Google wanted to turn the spectrum into a nationwide Wi-Fi network. While Verizon eventually won, a nationwide Wi-Fi network is still very possible and, in fact, seems logical given the direction of web technology today.
The point is that more devices will have access to these networks and that these networks will be more prevalent as time goes on. Ten to twenty years down the road, people will wonder how we managed with laptops disconnected from a Wi-Fi or 4G signal.
Web Access Will Not Focus Around the Computer
In a column on CNN earlier this month, Mashable’s Adam Ostrow explored one of the biggest trends at CES: the embedding of the web outside of the computer. At present, we focus our Internet use in the U.S. on our laptops. In Japan though, many more access the web primarily through their phones, a trend that is just beginning to sweep the states.
This is just the beginning. New Internet-enabled TVs will allow us to browse from the living room and soon our cars will become Wi-Fi hotspots.
The Apple Tablet looks to be the next stage of this evolution. Rumor has it that not only is the device going to have 3G access, but Apple envisions it as a shared piece of hardware among the family. Instead of having to jump onto the computer to check your email, you can just have your girlfriend or boyfriend pass you the tablet to check out what’s going on.
In ten years, computers will only be a small percentage of how we use our web. We’re going to be accessing it from nearly every device and appliance we own.
The Web Will Be Media-Centric
The time of text-based interactions is going to diminish until they’re just a minor component of our web experience. Yes, we will always write, blog, and tweet, but as more and more devices adopt touchscreen interfaces and alternatives to the keyboard and mouse (it’s already happening), our reliance on videos from YouTube and Hulu, social games like FarmVille, and interactive interfaces like the iPhone OS will grow rapidly.
Here are some of my thoughts on how I think this media-centric web will come to be:
Voice-to-text technology will be a major part of the media-centric web. The technology isn’t accurate enough for daily use yet, but devices like the Nexus One are pushing its limits. In a decade or two, it’ll be accurate enough to be a viable replacement for our keyboards.
Interfaces that rely on motion are going to be more important to computing and the media-focused web. Apple popularized phone touchscreen interfaces, and the Tablet has a good shot at popularizing that type of interface on larger screens. While we have a lot more to figure out before touchscreens are popularized on the desktop, I do think their time isn’t far away. I look forward to abandoning the old mouse and keyboard interface.
In the future, you won’t even have to touch the screen. HP’s “Wall of Touch” actually doesn’t require users to touch the screen in order to interact with it, and Microsoft’s Project Natal looks to turn gaming into a controller-less experience. This is the future.
These interfaces simply make it easier to bring up images, videos, music, and other multi-media.
It’s not about keyboard commands, but about apps, drag-and-drop, and having an immersive experience.
Social Media Will Be Its Largest Component
Stats published by Nielsen show that social media usage has increased by 82% in the last year, an astronomical rise. Facebook (Facebook), Twitter (Twitter), YouTube, blogs, and social interaction are becoming the focus of our online interactions, even more than search.
We’re social creatures, so it was only a matter of time until we figured out how to make the web an efficient medium for communication, sharing, and forging friendships. Now that we’re finally implementing the social layer though, it’s tough to find a scenario where the rise of social media doesn’t continue.
In ten years, when you access the web, most of the time you spend will be to connect with your friends. Almost all of that will be on social networks and through social media. It will be the #1 reason why we ever pull out our phones, tablets, or computers.
Getting meaningful results from a quantum computer requires what can only be described as a little magic.
Traditional computers – from your desktop PC to the supercomputers that IBM builds when it’s showing off – all use a system of switches that can be either on or off. We represent this binary state with a 1 or a 0.
Quantum computers are different: their basic units can be in both of these states at the same time. Such combined states are called ‘superpositions’.
The basic unit of a quantum computer is a quantum bit or ‘qubit’, and their ability to be in two simultaneous states is what makes quantum computers so fast. Sound more like magic than science? Read on, and you’ll discover that despite all the arcane physics, a working practical quantum computer could be just around the corner.
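To make superposition concrete, here’s a minimal sketch (in plain Python, not quantum hardware) that models a qubit as a pair of amplitudes. The function name and representation are illustrative assumptions, not a real quantum API:

```python
import math

# A qubit can be modelled as a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. The classical bits 0 and 1 correspond to
# (1, 0) and (0, 1); a superposition mixes both at once.
zero = (1.0, 0.0)
one = (0.0, 1.0)

# An equal superposition: reading it yields 0 or 1 with 50% probability each.
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))

def measure_probabilities(qubit):
    """Return the probabilities of observing 0 and 1 for a qubit state."""
    alpha, beta = qubit
    return abs(alpha) ** 2, abs(beta) ** 2

print(measure_probabilities(plus))  # roughly (0.5, 0.5)
```

The point of the sketch is simply that, until it is observed, the qubit carries weight on both outcomes at once, which classical bits cannot do.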
The past, present and future of AI
Interest in quantum theory and its application to computation is partly a result of work carried out by the mathematician Peter Shor. He developed an algorithm that could factor large numbers using a quantum computer.
The possible speed of this algorithm shows the potential of the technology. Shor’s algorithm is so powerful that it holds the promise of cracking the supposedly watertight encryption you and I use when doing internet banking, something that no conventional computer has come close to.
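The classical skeleton of Shor’s method is easy to sketch. The quantum speed-up comes entirely from finding the period r of a mod N; everything after that is ordinary arithmetic. The brute-force period search below stands in for the quantum step and is only practical for toy numbers:

```python
from math import gcd

def find_period(a, n):
    """Brute-force the smallest r with a**r % n == 1. This is the step a
    quantum computer performs exponentially faster; classically it is slow."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    """Given the period r of a mod n (with r even), Shor's reduction yields
    factors of n as gcd(a**(r//2) - 1, n) and gcd(a**(r//2) + 1, n)."""
    r = find_period(a, n)
    assert r % 2 == 0, "need an even period; pick another base a"
    return gcd(a ** (r // 2) - 1, n), gcd(a ** (r // 2) + 1, n)

print(shor_factor(15, 7))  # (3, 5)
```

Factoring 15 with base 7 is exactly the demonstration IBM later ran on a seven-qubit machine, as described below: the period of 7 mod 15 is 4, and the two gcds recover the factors 3 and 5.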
Indeed, the potential processing power of quantum computers truly boggles the mind. Because a quantum computer essentially operates as a massive parallel processing machine, it can work on millions of calculations simultaneously (whereas a traditional computer works on one calculation at a time, in sequence).
A 30-qubit quantum computer would have around the same processing power as a conventional computer running at 10 teraflops. By way of contrast, current desktop computers operate at mere gigaflop speeds.
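The reason a handful of qubits packs so much punch is that the state space doubles with every qubit added. A quick back-of-the-envelope illustration:

```python
# An n-qubit register holds 2**n amplitudes simultaneously, so its state
# space doubles with every qubit. A classical n-bit register holds just
# one of those 2**n values at a time.
for n in (1, 10, 20, 30):
    print(f"{n:2d} qubits -> {2**n:>13,} simultaneous basis states")
# 30 qubits already span over a billion basis states at once.
```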
Nuts, bolts and electrons
This sounds great, so why aren’t we all using them? The answer is that, at present, a working quantum computer capable of solving real-world problems is still firmly on the drawing board. To see why producing a proper machine is so hard, we need to go back to basics.
Electrons, photons and atoms form the memory and processor of the quantum computer. These comprise the magical qubits. Understanding, building and manipulating these qubits is the really tricky part of getting a quantum computer to function. It could even be said that the quantum computer exists in a parallel universe to our own.
When the computer works on a problem that you’ve given it, the calculations are performed within this parallel universe until an answer is presented. But it doesn’t stop there. You can’t just see the answer when the calculation is complete. In fact, you can’t see the answer at all until you actually go looking for it. And when you do look for it, you could disturb the state the quantum computer is in and end up getting a corrupted result.
All the parallel calculations that the quantum computer is doing don’t actually collapse down to a final answer until you consciously try to observe it. In some ways, then, it’s not the answer itself that’s important, but how you get hold of it. It’s this observational component of the quantum computer that forms the biggest obstacle to actually building one.
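A toy simulation makes the collapse-on-observation problem vivid. The function below is an illustrative sketch, not real quantum mechanics code: it picks one basis state with probability given by the squared amplitude, then throws every other branch away, just as a measurement does:

```python
import random

def measure(amplitudes):
    """Simulate observing a register: the superposition collapses to one
    basis state, chosen with probability |amplitude|**2, and the rest of
    the parallel branches are lost."""
    probs = [abs(a) ** 2 for a in amplitudes]
    outcome = random.choices(range(len(amplitudes)), weights=probs)[0]
    collapsed = [0.0] * len(amplitudes)
    collapsed[outcome] = 1.0  # every other branch is gone for good
    return outcome, collapsed

# Equal superposition over four basis states (a two-qubit register).
state = [0.5, 0.5, 0.5, 0.5]
outcome, state = measure(state)
print(f"observed basis state {outcome}; register is now {state}")
```

This is why a careless observation mid-calculation is so destructive: you don’t just read a value, you destroy the parallel computation that produced it.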
Physicists link this problem to ‘entanglement’, what Einstein called “spooky action at a distance”. Entanglement is, in essence, a correlation between qubits: observing how one qubit behaves tells you about the state of another, however far apart they are.
WHAT? NO MOUSE?: Dr Isaac Chuang loads a vial containing the seven-qubit quantum computer molecules into the nuclear magnetic resonance apparatus
What causes headaches is that as soon as you look at one qubit, you change its state and the entire system collapses back into being a standard digital computer. This is known as ‘decoherence’, and is what makes the observations or results you’re looking at inaccurate or misleading.
For these complex reasons and many others, actually building a working quantum computer that can solve real-world problems is far from easy.
Despite the difficulties, however, there has been progress in several areas of quantum computing. As the state of a qubit is, in effect, outside of the physical universe, the quantum computer can move away from classical computer designs using transistors connected by microscopic wires.
Moore’s Law has so far delivered massive growth in computer processing power as transistors and the connections between them become smaller with each passing year. However, things are starting to change, and solid-state quantum computers look set to bridge the gap between traditional transistor-based computers and their quantum cousins.
In a quantum computer, the computations are carried out by an exchange of information between individual qubits. This exchange of information is achieved by teleportation. This doesn’t mean that a qubit, such as an atom or photon, is ‘dematerialised’ à la Star Trek, but that the properties of one qubit are transferred to another. This has been achieved at the University of Vienna and the Austrian Academy of Science.
An optical fibre was used to connect lab buildings that were situated apart from each other across the river Danube. The lab was able to teleport qubits of information using a technique called polarisation.
They succeeded in exploiting the entanglement phenomenon, which meant that two particles remained tied together even though they were physically separate – the “spooky action at a distance” that Einstein talked about. The particles existed in a parallel universe where they were able to change their state.
As a result, they could exchange information, which is just what they would need to do in order to make meaningful calculations. So how far away are we from building working quantum computers?
Actually, we have already constructed some of these near-mythical machines, even though they’ve employed relatively few working qubits. The earliest example was built in 1998 by scientists working at MIT and the University of Waterloo. It only had three qubits, but it showed the world that quantum computers were not just a fairy tale that physicists told their children.
Two years later, a seven-qubit quantum computer that used nuclear magnetic resonance to manipulate atomic nuclei was built by Los Alamos National Labs. 2000 was also the year that IBM proved it too could build a quantum computer. Dr Isaac Chuang led the team that built a five-qubit quantum computer which enabled five fluorine atoms to interact together.
The following year saw IBM once again demonstrate a working quantum computer. This time the firm was able to use Shor’s algorithm. IBM used a seven-qubit quantum computer to find the factors of the number 15.
A more complex quantum computer was also built in 2006 by MIT and Waterloo, and in 2007 a company called D-Wave burst onto the market with what it claimed was the world’s first 16-qubit quantum machine.
RIDE D-WAVE: D-Wave Systems’ 16-qubit quantum computer is the subject of much debate
D-Wave has yet to prove that its system is a true quantum computer, but this year also saw a team at Yale build the first solid-state quantum processor. The two-qubit superconducting chip was able to perform some basic calculations.
The significance of this development by Yale’s scientists is that it shows that a quantum computer can be built using electronics not that dissimilar to the components found in your desktop PC.
Yale’s system used artificial atoms that could be placed in the superpositional state quantum computers require. Until this development, scientists could not get a qubit to last longer than a nanosecond. In comparison, the Yale qubit lasted microseconds. This is long enough to perform meaningful calculations.
Scientists working at the Universities of Manchester and Edinburgh have combined tiny magnets with molecular machines to create what could end up being the building blocks for future quantum computers. Professor David Leigh of the University of Edinburgh’s School of Chemistry said:
“This development brings super-fast, non-silicon-based computing a step closer. The magnetic molecules involved have potential to be used as qubits, and combining them with molecular machines enables them to move, which could be useful for building quantum computers. The major challenges we face now are to bring many of these qubits together to build a device that could perform calculations, and to discover how to communicate between them.”
Looking forward to that goal, one of the most promising developments in the field is quantum dots. These are nano-constructions made of semiconductor material. As such, we can use many of the techniques that we now use to build traditional computers to harness quantum dot technology.
It may be possible to manufacture quantum dots in much the same way as we currently manufacture microprocessors. If the technology were successful, we could build quantum computers with as many qubits as we need. As things stand it’s still too early to make complete logic gates from quantum dots, but the technology looks very promising indeed.
The supercomputers we have today look like abacuses when compared to the processing power that quantum computers promise. With so many different avenues being explored by scientists, the final working structure of the quantum computer has yet to be realised.
What recent work does show is that it’s a realistic ambition to build a commercial quantum computer over the next few years. When that power arrives, we’ll see a truly quantum shift in how we all manipulate information.