Behind the success of the new wave of location-based mobile apps taking hold around the world is digital mapping. Location data is core to popular ride-sharing services such as Uber and Lyft, but also to companies such as Amazon and Domino’s Pizza, which are testing drones for faster deliveries. Last year, German delivery firm DHL launched its first “parcelcopter” to send medication to the island of Juist in the North Sea. In the humanitarian domain, drones are also being tested for disaster relief operations.

Better maps can help app-led companies gain a competitive edge, but it’s hard to produce them on a global scale. A few select players have engaged in a fierce mapping competition. Google leads the race so far, but others are trying to catch up fast. Apple has enlarged its mapping team and renewed its licensing agreement with TomTom, which plans to 3D map European and North American freeways by next year.

DHL’s prototype “parcelcopter” is a modified microdrone that costs US$54,900 and can carry packages up to 1.2kg. Wolfgang Rattay/Reuters

In Europe, German carmakers Audi, BMW and Mercedes agreed to buy Here, Nokia’s mapping business. The company had been coveted by Uber, which has gained mapping skills by acquiring deCarta and part of Microsoft Bing. Further signs of the fever for maps are startups such as Mapbox, Mapsense, CartoDB, Mapillary and Mapzen. The new mapping services are cloud-based, mobile-friendly and, in most cases, community-driven.

A flagship base map for the past ten years has been OpenStreetMap (OSM), also known as the “Wikipedia of mapping”. With more than two million registered users, OpenStreetMap aims to create a free map of the world. OSM volunteers have been particularly active in mapping disaster-affected areas such as Haiti, the Philippines and Nepal.
A recent study reports how humanitarian response has been a driver of OSM’s evolution, “in part because open data and participatory ideals align with humanitarian work, but also because disasters are catalysts for organizational innovation”.

A map for the commons?

While global coverage remains uneven, companies such as Foursquare, Flickr and Apple, among others, rely on OSM’s free data. The commercial use of OSM primary data, though, does not come without ongoing debate within the community about licence-related issues.

The steering wheel is seen resting in the middle of the dashboard inside a Rinspeed Budii self-driving electric city car in Geneva. Arnd Wiegmann/Reuters

Intense competition for digital maps also flags the start of the self-driving car race. Google is already testing its prototypes outside Silicon Valley, and Apple is rumoured to be working on a secret car project code-named Titan. Uber has partnered with Carnegie Mellon and Arizona universities to work on vehicle safety and cheaper laser mapping systems. Tesla is also planning to make its electric cars self-driving.

The ultimate goal

Are we humans ready for this brave new world? Research suggests young people in North America, Australia and much of Europe are becoming less likely to hold a driver’s licence (or, if they do, to drive less). But even if a new generation of consumers were ready to jump in, the challenges remain huge. Navigation systems will need to flawlessly process, in real time, position data streams for buildings, road signs, traffic lights, lane markings and potholes, all seamlessly combined with ongoing sensing of traffic, pedestrians and cyclists, road works and weather conditions. Smart mapping at its best.

Legal and ethical challenges are not to be underestimated either. Most countries impose strict limits on testing self-driving cars on public roads. Similar limitations apply to the use of civilian drones.
And the ethics of fully autonomous cars is still in its infancy. Autonomous cars probably won’t be caught texting, but they will still be confronted with tough decisions when trying to avoid potential accidents. Current research engages engineers and philosophers to work on how to assist cars in making split-second decisions that can raise ethical dilemmas.

But the future of digital maps is not just on the go. Location-based service revenues are forecast to grow to €34.8 billion in 2020. The position data deluge of the upcoming geomobile revolution gives maps a new frontier: big data analytics. As Mapsense CEO Erez Cohen notes: “the industry is much larger than the traditional GIS industry. It’s actually growing at a massive rate, and there are a massive number of new companies that need the services of mapping analytics because they’re generating all this location data.”

Digital mapping technology promises to unveil our routines, preferences and consumer behaviour on an unprecedented scale. Staggering amounts of location data will populate our digital traces and identities. The impact on our lives, organisations and businesses is yet to be fully understood, but one thing is sure: the geomobile revolution will be mapped.

Marta Poblet is Vice-Chancellor’s Principal Research Fellow, Associate Professor, Graduate School of Business and Law at RMIT University

This article was originally published on The Conversation. Read the original article.
Intel has used the fifth anniversary of its purchase of security company McAfee to release a review of how the cybersecurity landscape has changed in that time. There are a number of surprising observations in the report, and a few that were expected.

Of little surprise has been the continued lack of importance a large number of companies, and individuals, have placed on implementing basic security practices such as applying software updates and enforcing password policies. The reason may be that people are “playing the odds”, believing that the risk of cybercrime happening to them is relatively small. It may also be that they simply don’t want to put in the effort, or pay for computer support or advice.

Cybercrime as an industry

More surprising, to McAfee at least, has been the rapid development of cybercrime into a fully fledged industry with “suppliers, markets, service providers (‘cybercrime as a service’), financing, trading systems, and a proliferation of business models”. The growth of this industry has been fuelled by the use of cryptocurrencies like Bitcoin and the protective cloak for criminals provided by technologies like Tor.

The sophistication of the cybercrime industry has shifted criminals’ focus away from simply stealing credit cards to the perhaps more lucrative, large-scale deployment of “ransomware”. This has ranged from encrypting the contents of a user’s computer and then demanding payment to unlock it, to the recent extortion of users caught up in the publishing of personal sexual information from the Ashley Madison dating site.

Mobile phones have remained “relatively” cybercrime free

What hasn’t come to pass (yet) is the pervasive hacking of mobile phones. Part of the reason for this has been Apple’s, and increasingly Google’s, approach of controlling the software that is allowed to be installed on their devices.
The other reason is perhaps the fact that these devices are backed up more frequently and automatically, making recovery a much easier option. There is also potentially less of interest to cybercriminals on a mobile phone, as most of the really important, and valuable, personal content is stored in the cloud.

The recent exception to the relative safety of mobile devices was the report that up to 225,000 Apple accounts had been compromised from Apple phones. The compromise in this case only affected phones that had been “jailbroken”, a process that allows the user to circumvent Apple’s restrictions on what apps can run on the phone. What this has demonstrated is that the restrictions on what software can run on Apple and Android phones are actually a major security feature, and circumventing them significantly increases the risk of being compromised.

The threat to the Internet of Things

Along with mobile phones, the smart devices that make up the Internet of Things have also been relatively free of large-scale hacks. Researchers have demonstrated that it is possible to hack things like cars, including being able to apply a car’s brakes by sending the vehicle’s control system an SMS. In the case of this type of vulnerability, car manufacturers have moved quickly to plug the security holes. That criminals haven’t yet turned their attention to smart devices is probably down to the lack of ways to commercialise these types of compromises.

People are still the problem

Organisations like McAfee are fighting a largely losing battle as long as companies continue not to take security seriously. In fact, US companies are spending and doing less where security is concerned than in previous years. This has led organisations like the OECD to recommend national strategies around the development of cybersecurity insurance.
A benefit of this type of insurance would be the requirements insurance companies would place on policyholders to implement a basic level of security best practice.

David Glance is Director of UWA Centre for Software Practice at University of Western Australia

This article was originally published on The Conversation. Read the original article.
Apple is about to open a new front in the ongoing war against online advertising. The new version of its mobile operating system, iOS 9, will support ad blocking in Safari, its mobile web browser.

A study by Adobe and pro-advertising company PageFair finds that the popularity of ad blocking extensions in desktop web browsers is responsible for US$22 billion in lost revenue to the websites that host ads. It estimates that there are now 198 million users worldwide actively blocking ads. Among 400 users surveyed by the report’s authors, the main reasons cited for using ad blocking software were avoiding privacy abuse by targeted advertising and the sheer number of ads encountered when browsing.

A typical message from a website about the use of ad blocking. TheGuardian.com screen grab

The practice of trying to guilt users into switching off their ad blocking software when visiting sites doesn’t appear to be working, and the display of such messages to ad blocking users by websites has diminished.

Ad blocking apps for Safari on iOS 9 are already being made available to beta testers. One such app, Crystal, not only blocks ads; experiments by its developer have shown that the ad blocking can load web pages in the browser up to four times faster. This also brings a significant reduction in data use, which matters on a mobile device using cellular data. Another ad blocking app in beta testing, Purify, appears also to block ads on YouTube.

Ads stand out, and that’s precisely why so many people block them. Pascale Kinchen Douglas/Flickr, CC BY-SA

Ad blocking on mobile is not completely new

Ad blocking has been available for some time on Android for users of the Firefox mobile browser, and for Google Chrome. In the case of Google Chrome, blocking ads requires installing an app that is not from the Google Play app store.
Ad blocking has also been available on Apple devices, but it has worked by blocking access to certain domains that serve up the ads. AdBlock, for example, works by pretending to be a virtual private network (VPN) connection and filtering out access to specific sites. This, of course, only works if the list of sites to block is up to date. It also doesn’t allow for “whitelists”: sites that are allowed through because they are deemed “acceptable”.

However, the move by Apple is going to boost ad blocking on mobile dramatically, because it makes the process that much easier. This has advertisers, and sites that make money from advertising, increasingly worried, because it raises their costs in terms of creating ads that are less intrusive and deemed more acceptable (although this may still not convince the public to view them).

Apple’s iOS 9 is due to be released later this year and will include content blocking. Apple

For Apple, though, the move to allow ad blocking gives iPhone users a better browsing experience at no cost to itself: Apple makes no money from online advertising through mobile browsing. And, of course, its own ads that are served up through apps are unaffected by ad blocking software. As a bonus for Apple, the company most affected by ads being blocked is Google, which derives 90% of its revenue from advertising. Apple is able to increase the level of privacy it offers its customers without directly getting involved itself and risking annoying companies that rely on advertising revenues.

The advertisers’ dilemma

Many ads can be deliberately deceptive. Create Meme

It is hard to feel sorry for the advertisers and the sites that resort to displaying targeted, invasive ads, such as those sold by Google, Facebook, Yahoo and others. These ads are designed to target individuals based on information gathered about them as they use the internet. So not only are they annoying, they are exploiting people’s privacy.
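The domain-blocklist approach described above can be sketched in a few lines. This is an illustrative sketch only: the blocklist entries and the helper function are invented for illustration and are not taken from AdBlock or any real product.

```python
# A made-up blocklist of ad-serving domains, for illustration only.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def is_blocked(host: str) -> bool:
    """Return True if the host, or any parent domain of it, is blocklisted."""
    parts = host.lower().split(".")
    # Check the full host first, then each parent domain in turn
    # (e.g. "cdn.ads.example.com", then "ads.example.com", then "example.com").
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKLIST:
            return True
    return False

print(is_blocked("ads.example.com"))      # True: blocked directly
print(is_blocked("cdn.ads.example.com"))  # True: blocked via a parent domain
print(is_blocked("news.example.org"))     # False: allowed through
```

A filter like this sits between the browser and the network (in AdBlock’s case, disguised as a VPN) and simply refuses to resolve requests to blocked hosts, which is why a stale blocklist lets new ad domains slip through.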
Adding insult to injury, the inclusion of ads slows down web page loads and potentially ends up costing end users money by eating into their data allocation.

The argument that content providers are only able to provide content by exploiting their visitors is not a good one, because it implies that those visitors signed up to an agreement to view ads in exchange for the content. Of course, users generally do no such thing. And given the explicit choice, they might easily opt simply not to visit the site.

Most users don’t necessarily mind being provided with information that allows them to make a reasoned choice about a product once they have decided to buy it. But advertising that tries to persuade a consumer to buy something they weren’t considering buying is a different matter. Once advertisers do more of the former and less of the latter, perhaps ad blocking will no longer be necessary.

David Glance is Director of UWA Centre for Software Practice at University of Western Australia

This article was originally published on The Conversation. Read the original article.
Ride sharing, both the legal and “illegal” type, is growing rapidly around the world, with new Australian entrant RideBoom the latest to take on market leader Uber.

Uber, which began in San Francisco in 2009, now operates in more than 50 countries with 300,000-plus driver-partners (as they are known in “uberspeak”) in the US alone. In Australia it is moving towards 20,000 driver-partners. The difference between Uber and many of its rivals, though, is that most of Uber’s direct competitors operate within the legal confines of the countries they are in. Uber, on the other hand, is paying for its drivers to ignore local laws.

Uber is in a global fight to win a regulatory environment favourable to its business model. This fight relies largely on ambiguity about how Uber should be defined as a company. Uber steadfastly denies any suggestion that it is a service provider, insisting instead that it is a “technology company” … “seamlessly connecting riders to drivers”. Uber maintains that its driver-partners are not employees. This has been, and continues to be, challenged in the courts and on the streets.

Uber also wants to shape regulation that extends well beyond labour law, in order to boost its competitive advantage. The impact is being felt by non-Uber taxi drivers, prompting street protests everywhere from Paris to Mumbai, London to Mexico City.

Taxi drivers came out in force to protest against Uber in London. Facundo Arrizabalga/EPA/AAP

Fighting on multiple fronts

In Australia, the current round of regulatory sparring has Uber contesting decisions across state and federal jurisdictions. In Queensland, driver-partners have been hit with more than A$1.7 million in fines for providing “unlicensed” taxi services. Despite it being Uber’s practice around the world to pay drivers’ fines, the penalties in Queensland are being disputed and have not yet been paid. Similar situations can be seen around Australia and internationally.
In California, Uber was hit with a US$7.3 million fine for failing to provide information to the California Public Utilities Commission about the nature of the services provided by its drivers, including access for disabled clients. These are just a few of many examples of Uber using its economic clout to promote regulatory recalcitrance and to reform rules with which it doesn’t agree.

In some cases the stakes are even higher. Earlier this month in Hong Kong, five drivers were arrested for illegally hiring out their vehicles. In France, two Uber executives have been arrested, and in South Korea 30 people associated with Uber have been charged with running an illegal taxi firm.

Multiple legal challenges and ongoing penalties are costly, but they do not seem to deter Uber, a company valued at over US$50 billion and backed by the likes of investment bank Goldman Sachs. Uber is not pulling punches in its attempts to fashion the regulatory landscape and influence public opinion and policymakers. To that end, it has recruited lobbyists ranging from David Plouffe, one of the orchestrators of Obama’s 2008 campaign, to Rachel Whetstone, former head of Google communications, and Jack Lanvin, formerly chief of staff to the Illinois Governor.

An Uber executive speaks to the media while Uber riders and driver-partners take part in a rally against proposed legislation limiting for-hire vehicles in New York. Eduardo Munoz/Reuters

A high stakes game

Uber seems to have made a strategic decision to take the legal hits associated with flouting local regulations, on the view that these are unlikely to land a knockout blow. But the business will need to survive a succession of assaults from regulatory bodies and individuals, sometimes in the form of class actions. Using contractors while gambling on regulatory frameworks and uncertain judicial responses is a high-risk strategy.
Homejoy, a housekeeping platform, became a victim of risk aversion among investors in the face of similar lawsuits. Investors began to back off and, after failing to raise enough capital to pursue its growth plans, it shut down. Likewise, if Uber were to see a significant proportion of its driver-partners reclassified as employees, or face a government crackdown on its aggressive tax minimisation practices, it could come under pressure at a time of escalating losses driven by its determined expansion efforts.

So, if we were to take a guess, what might be the likely outcome of this rumble in the jungle? Here are two possibilities.

Scenario 1

Individual lawsuits, class actions and aggressive regulation generate increasing costs for Uber. In several countries, tribunals order Uber to pay heavy compensation and to reclassify its drivers as employees. Investors start to back out because of the financial and reputational risk. Competition increases, saturating the market and increasing the cost of drivers. The business model is no longer sustainable and Uber goes bust. Competitors take over with a more traditional model of employment, which means a higher cost of operation but a lower cost of litigation. This new generation of taxi drivers enjoys working conditions comparable to other workers in the economy. Regulators (and drivers) win in a knockout.

Scenario 2

Uber adopts a risk-mitigation strategy, meeting existing regulation where necessary but maintaining its model in several countries, and as a result its market leadership. It continues to co-invest in the development of a self-driving car. In 2020, the company operates its first self-driving car. The program to replace millions of driver-partners starts immediately in the US and is progressively deployed globally. Robots produce the cars. If Uber’s driver-partners are lucky enough to find another job, they can always use the new self-driving Uber service to get to work. Uber declared winner on points.
Sarah will be on hand for an Author Q&A between 3 and 4pm on Thursday, August 27. Post your questions in the comments section below.

Sarah Kaine is Associate Professor in Human Resource Management and Industrial Relations at University of Technology Sydney and Emmanuel Josserand is Professor of Management at University of Technology Sydney

This article was originally published on The Conversation. Read the original article.
Bitcoin has been declared the “end of money as we know it” and a currency for our times: decentralised, and created specifically for seamless exchange on the internet. That is, it would be, if everyone knew exactly what it was and was actually prepared to use it. The problem is that, for most of the general public, Bitcoin remains a mystery.

In a recent survey of consumers carried out by analyst firm PwC, only 6% said that they were very or extremely familiar with currencies like Bitcoin; 83% of those surveyed said they had little to no idea what Bitcoin was. Contradicting this is the fact that there has been a great deal of publicity around Bitcoin, reflected in people searching for the term. Google Trends, for example, shows that searches for Bitcoin exceeded those for two other payment systems, Apple Pay and Google Wallet.

Bitcoin search frequency. Google Trends

Most of this attention has come from well-publicised stories of Bitcoin’s association with drug crime on the internet, or hacks of Bitcoin exchanges like Mt Gox, where hundreds of millions of dollars’ worth of the currency was stolen.

So what is it exactly?

Bitcoin is first and foremost a currency like any other. One bitcoin can be exchanged for almost every other type of currency on any number of “exchanges”. At the moment, one bitcoin is worth about US$230. Once a bitcoin is bought, it can be used to buy goods at a price in bitcoin that is determined by the current exchange rate quoted on the various exchange markets. This is no different to using Australian dollars or euros to buy things on the internet priced in, say, US dollars.

Using Bitcoin to buy things is very much the same as using electronic payments from a bank. The merchant will have an account number that is used when sending the required number of bitcoin. Once sent, the merchant will confirm payment has been received, and everything then proceeds just as if payment had been made in any other currency.
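The exchange-rate arithmetic involved is no more exotic than any other currency conversion. As a minimal sketch, using the rough US$230-per-bitcoin figure quoted above rather than live market data:

```python
# Approximate rate from the article; a real merchant would fetch a live quote.
USD_PER_BTC = 230.0

def price_in_btc(price_usd: float) -> float:
    """Convert a US-dollar price into bitcoin at the given exchange rate."""
    return price_usd / USD_PER_BTC

# A US$46 item would be quoted at 0.2 BTC at this rate.
print(round(price_in_btc(46.0), 4))  # 0.2
```

The only practical difference from converting dollars to euros is that the bitcoin rate moves quickly, so merchants typically re-quote the bitcoin price at the moment of sale.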
If it is that simple, why does everyone get confused? The trouble Bitcoin has had from the start is that it was an invention of computer science. Many of its attributes are based on the intricacies of the technology that makes it work, and are of interest only to specialists. For it to replace current payment systems, Bitcoin had to be marketed as having distinct advantages over using credit cards or services like PayPal. The difficulty is that those advantages are subjective, and merchants, and the public as a whole, have had a hard time seeing them. The advantages of Bitcoin have certainly not been enough to warrant using it in preference to credit cards.

But what about the “block chain”?

As the marketing around Bitcoin hasn’t fared too well in positioning it as a replacement for credit cards and electronic banking, people have instead started talking about the really confusing part of Bitcoin, the “block chain”. Needing to explain how the block chain works has also been a real weakness in the marketing of Bitcoin. Very few people care how banks agree that a transaction has taken place. They only care that they can look at their bank account and see a withdrawal of a specific amount at a specific time. What magic happens behind the scenes to make that work is really of no consequence.

People have focused on the technology because somehow it was put forward as being cleverer than how banks do things at present. But they argued this without actually acknowledging that the current system operates incredibly well and is extremely efficient. The current banking system has been baked into society for thousands of years and, despite some of its failings (high charges and patchy service quality), has succeeded because it just works.
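For readers who do want a peek behind the curtain, the core idea of the block chain can be shown in a few lines: each block records transactions plus the hash of the previous block, so quietly rewriting old history breaks every later link. This is a teaching sketch only; it is not Bitcoin’s actual data structures, mining or consensus rules.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    data = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    """A block is just its transactions plus the previous block's hash."""
    return {"transactions": transactions, "prev_hash": prev_hash}

# Build a tiny two-block chain.
genesis = make_block(["Alice pays Bob 1 BTC"], prev_hash="0" * 64)
second = make_block(["Bob pays Carol 0.5 BTC"], prev_hash=block_hash(genesis))

# The chain is intact: the second block points at the first block's hash.
print(second["prev_hash"] == block_hash(genesis))  # True

# Tamper with the first block, and the stored link no longer matches.
genesis["transactions"][0] = "Alice pays Bob 100 BTC"
print(second["prev_hash"] == block_hash(genesis))  # False: the chain is broken
```

That tamper-evidence, replicated across thousands of computers instead of held by one bank, is essentially the property the block chain’s promoters are excited about.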
It’s not that complicated

Whilst the average member of the public is still puzzling over why they should care about Bitcoin, technologists and finance specialists will continue to argue about the relative merits, or otherwise, of the specific attributes of cryptocurrencies over our current payment systems. None of that, however, is central to understanding what Bitcoin is. All that is really important to know is that Bitcoin is simply another form of money.

David Glance is Director of UWA Centre for Software Practice at University of Western Australia

This article was originally published on The Conversation. Read the original article.
In virtually every science fiction novel or film there is an evil corporation that dominates the world, from LexCorp in the Superman franchise to Weyland-Yutani in Alien. Their masterminds tend to hide their ambitions behind stretched smiles and a language of care. That is, until the story’s protagonist exposes the evil afoot and saves the world. Compare this to the real world. We have corporations with huge influence that do bad things, we are well aware of it, and yet we continue to let it happen. Why?

The recent New York Times exposé of life working for Amazon used old-fashioned investigative journalism to reveal the harsh reality of working in the company’s head office in Seattle. It documents a culture of relentless criticism, with a reliance on continual measurement of performance and long working hours. Unsurprisingly, this results in high labour turnover, as those who refuse to become “Amabots” (a term used to describe someone who has become part of the system) get spat out like returned parcels.

Nothing new to see here

There has been predictable criticism of Amazon following these revelations, and rightly so. But consider what we already know about the company. We have known for some time that it has a tax structure which ensures that it minimises its contribution to paying for the roads that allow it to transport its goods and the education that allows its employees to read and write (Amazon’s British business paid just £4.2 million in tax in 2014, despite selling goods worth £4.3 billion). We know, following the work done by Spencer Soper in the US and Carole Cadwalladr in the UK, that the conditions in its warehouses are punishing. Long hours, low wages and continual monitoring by technology result in high labour turnover. Oh, and (surprise, surprise) Amazon doesn’t like trade unions.

Amazon factory workers in Germany striking last year for better pay and conditions. EPA/Roland Weihrauch

What else do we already know?
That Amazon is a company which seeks to dominate markets through cost efficiencies, putting competitors out of business or ensuring that they have to do their business through Amazon. There are well-documented accounts of its attempts to ensure that publishers offer the same discounts it does, or that all print-on-demand has to go through its own company. And, if that fails, it simply buys the competition with the huge piles of cash it has built from doing what it does, as it did with AbeBooks, LoveFilm, Goodreads, Internet Movie Database, The Book Depository and BookFinder, to name a few. And this isn’t even to mention its domination of the e-reader market through Kindle. Even if it doesn’t say so on the website, you might well be doing business through an Amazon subsidiary. If this isn’t a strategy for world domination, what is it?

In 21 years, Amazon has grown to become a company with almost US$89 billion in annual turnover. To put this in context, that’s greater than the GDP of countries such as Cuba, Oman and Belarus. And it has made Jeff Bezos, its driven founder, a personal fortune of around US$47 billion, about the same as the GDP of Costa Rica or Slovenia. Among his many plaudits, he was named “World’s Worst Boss” by the International Trade Union Confederation at its World Congress in May 2014. He also now owns the Washington Post.

All this, and much much more, is known about Amazon, but it continues to grow, recently suggesting a move into delivery by drones and beginning a food delivery service in a few US cities.

In his recent novel The Circle, Dave Eggers describes a US internet company (a cipher for Google) that gradually moves towards world domination, using relentless monitoring of its employees and a continual rhetoric of exceeding customer needs.
In the novel, when customers or employees are confronted with criticisms of what the company does, they don’t see the problem, instead pointing to all the ways in which the company is making their lives easier. Criticism is seen as negative, practised by people who want to turn the clock back.

Results driven: Amazon CEO Jeff Bezos. EPA/Michael Nelson

Talk to most people about why Amazon is a problem and you will get similar responses. “But it makes things so easy.” “They are cheaper than anyone else.” “What’s wrong with efficiency?”

The law of the jungle

But this isn’t just a debate about Amazon, as if it were a bad company surrounded by lots of good ones. It raises much broader questions about what corporations do. Essentially, they are machines designed to grow, to externalise their costs and to privatise their profits. The fact that this produces a management culture of extreme bullying, anti-union practices in its workplaces, or anti-competitive strategies in its marketplaces shouldn’t really amaze us. It’s the law of the jungle, right? What should amaze us is the extent to which we know that this happens and yet, unlike the heroes in the sci-fi films, we continue to do nothing about it.

Behind the reflective surfaces of its buildings and website, Amazon is selling us something else: a vision of a different world of work and consumption. This is a privatised, measured and monetised world in which every social value is for sale. You can even buy books that tell you what’s wrong with corporations through the website, because the content doesn’t really matter that much. All that matters is that the company makes money, dominates markets and keeps customers happy. That is what Amazon sells, and we continue to keep buying it.

Martin Parker is Professor of Organisation and Culture at University of Leicester

This article was originally published on The Conversation. Read the original article.
We are facing the “silver tsunami” of an ageing society: within a few years, for the first time, there will be more people over the age of 65 living on this planet than under the age of five. Apart from the increased burden of chronic diseases that accompanies old age, the biggest impact of an increasingly ageing population will be felt in the numbers of people with dementia, and in particular Alzheimer’s disease. In Europe, around 7% of the population over 65 have dementia. This rises dramatically with age: nearly 50% of women and 30% of men over the age of 90 will suffer from the condition.

The Internet of Things

For many of us, there is the desire to “age in place”, that is, to remain in our homes and stay as active and independent for as long as possible. One possible way of achieving this is to use technological assistance, in particular the connected smart devices collectively called the “Internet of Things”, which are rapidly becoming a reality in the home.

Devices in the Internet of Things can communicate with each other and with software running in the cloud. They can act as sensors, monitoring what is happening in the environment and, in particular, with elderly people themselves. They can also process information and take actions, such as controlling heating and air conditioning, locking doors and windows, reminding people to take medications, encouraging them to be active, or simply suggesting they go for a walk.

Data collected through the Internet of Things in the home can be used to provide an overall assessment of “observations of daily living”. These observations form a pattern of everyday life, and any deviation from that pattern can trigger an alert to those living in the home, their family or their health carers.
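As a rough illustration of how such “observations of daily living” might raise an alert, a system can compare today’s sensor readings against a person’s typical pattern and flag large deviations. The sensor names, numbers and threshold below are invented for illustration, not drawn from any real product.

```python
from statistics import mean, stdev

def deviation_alerts(history: dict, today: dict, threshold: float = 2.0) -> list:
    """Flag any sensor whose reading today is more than `threshold`
    standard deviations away from its historical average."""
    alerts = []
    for sensor, readings in history.items():
        avg, sd = mean(readings), stdev(readings)
        if sd > 0 and abs(today[sensor] - avg) / sd > threshold:
            alerts.append(sensor)
    return alerts

# A week of typical activity: fridge openings and room movements per day.
history = {
    "fridge_openings": [8, 9, 7, 8, 10, 9, 8],
    "room_movements": [40, 42, 38, 41, 39, 43, 40],
}
# Today the fridge has barely been opened: a possible cause for concern.
today = {"fridge_openings": 1, "room_movements": 41}
print(deviation_alerts(history, today))  # ['fridge_openings']
```

A real system would use far richer models than a simple standard-deviation test, but the principle is the same: the technology learns the routine, and it is the departure from routine, not any single reading, that prompts a carer to check in.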
The challenges to letting the Internet of Things do the caring

Despite all of the possibilities of these devices helping the elderly to stay independent and active, there are some significant obstacles that need to be overcome before their full potential becomes a reality. The first is acceptance by the elderly themselves. They may see remote monitoring devices as an intrusion on their privacy. They may also see any outward signs of using this technology as a public symbol of their age and frailty, and so avoid its use for that reason. They may be concerned about not being able to use the technology properly, in particular triggering false alarms. Finally, the devices may not be considered affordable, or at least too much of a luxury to spend money on.

Dressing up the Internet of Things

Some of these obstacles can be addressed by the design of the devices themselves. A US company, Live!y, has created a smartwatch, not dissimilar to one from Apple or Samsung, that provides alerts and reminders and can also be used to summon help and communicate with a monitoring service. It also measures activity by counting steps and, usefully, tells the time. The watch acts in concert with a range of sensors that monitor medication use, access to the fridge and movement in various rooms. The watch can also detect falls and automatically call for help. By making the device seem like an everyday watch, it reduces at least some of the potential barriers to the elderly in its use.

Sensing the state of their health

Telehealth is another field of care of people in the home that utilises connected smart devices. Not only are we facing a rapidly increasing aged population, but a major proportion of that population have one or more chronic conditions. By using remote monitoring of weight, blood pressure, pulse and ECG, problems can be detected without a visit to a GP and, more importantly, without a trip to hospital.
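At its core, this kind of telehealth monitoring reduces to comparing each reading against a safe range and escalating anything that falls outside it. A minimal sketch in Python follows; the field names and limit values are invented for illustration and are not clinical guidance:

```python
def check_vitals(reading, limits):
    """Return the names of any vital signs outside their safe range."""
    out_of_range = []
    for name, value in reading.items():
        lo, hi = limits[name]
        if not lo <= value <= hi:
            out_of_range.append(name)
    return out_of_range

# Hypothetical per-patient safe ranges, set by a clinician.
limits = {"systolic_bp": (90, 140), "pulse": (50, 100), "weight_kg": (55, 75)}

# One day's remote reading: blood pressure is elevated.
reading = {"systolic_bp": 162, "pulse": 78, "weight_kg": 71}
print(check_vitals(reading, limits))  # ['systolic_bp']
```

A real service would also track trends over time (a steady weight gain can signal fluid retention before any single reading breaches its limit), which is why the daily-living baseline approach and simple range checks tend to be used together.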
The smart devices can sense, make decisions locally, and act on that information. Ultimately, if this is to be of any use, the directions originating from these devices need to be followed by those the technologies are caring for. This is still the most challenging aspect of the entire process. Reminding someone that they have failed to take their medication may be of no use if that person has simply decided that they don’t want to take it. What the health profession can do about the elderly not taking medications as intended is still a major problem, and reminders are not the entire solution. The fact that a solution does not work for everyone, though, is not a reason to withhold it from those it will help.

Before we see widespread adoption of the Internet of Things in the home, however, we will need to see cheaper, more attractive and more useful devices that integrate with smartphones and computers and the apps running on them. The best chance of this happening is the initiatives from Apple and Google. Although Apple’s HomeKit and Google’s Brillo are aimed at everyone’s homes, their popularity may see the next generation of the elderly already prepared for their help in staying independent and active for longer.

David Glance is Director of UWA Centre for Software Practice at University of Western Australia

This article was originally published on The Conversation. Read the original article.
Google is seen as a world leader in innovation, an important backer of tech start-ups and a pioneer in all our futures. The corporation, which is financially the size of a mid-range country, just reorganised its structure so that it can continue to invest in experimental technologies – such as drones, driverless cars and unusual medical devices – without worrying shareholders. But many of Google’s current publicly reported innovations seem to be aimed at encouraging us to spend even more time connected to the internet. They are “technology-push” innovations: products that require the creation of a new market because there isn’t an obvious existing demand. Google Glass, the wearable optical computer that has now been discontinued, is a good example. It didn’t appear to be rooted enough in a genuinely understood need.

On the other side there are “need-pull” innovations that respond to existing needs and are the result of humble enquiry. Developments by Google in security devices and modular smartphones all appear, on the surface, to meet needs. But are they the genuine result of humble enquiry? The problem with Google’s moonshots is that they are fired at the Moon. And there’s no one on the Moon (not yet, anyway). Many real needs are social, cultural and environmental, not rooted only in a hunger for the next wearable gizmo. Here are some real-need challenges that Google could put its mighty innovation machine to work tackling, improving the world in the process.

Digital dealmaker Shutterstock

1. Making money more secure

In a world of identity theft and online fraud, there is a huge need for more secure ways to transfer money and carry out transactions. Various ways to simply move money around, for example between smartphones, are emerging, but other innovations could vastly improve security. “Smart contract” programs could ensure both parties stick to their side of a deal.
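To make the idea concrete, here is a toy sketch of such conditional, escrow-style logic in Python. The class and names are invented for illustration; real smart contracts run on platforms such as Ethereum and verify cryptographic signatures rather than comparing a string:

```python
class EscrowContract:
    """Toy escrow: funds go to the seller only once delivery is
    confirmed; otherwise they are refunded to the buyer."""

    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.settled = False

    def confirm_delivery(self, courier_signature):
        # A real contract would verify a cryptographic signature here.
        if courier_signature == "valid":
            self.delivered = True

    def settle(self):
        """Release funds exactly once: to the seller on confirmed
        delivery, back to the buyer otherwise."""
        if self.settled:
            return None
        self.settled = True
        recipient = self.seller if self.delivered else self.buyer
        return (recipient, self.amount)

contract = EscrowContract("alice", "bob", 100)
contract.confirm_delivery("valid")
print(contract.settle())  # ('bob', 100)
```

The point of the design is that neither party has to trust the other: the release condition is encoded up front, and once the contract is settled it cannot pay out a second time.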
For example, if you buy something online then a smart contract could take the money from your bank account only when it receives notification from the delivery company that the product has arrived. Virtual or cryptocurrencies such as Bitcoin are starting to incorporate such technology, but these systems still carry suspicion due to their use by black markets. Google has so far just hovered around the edges of Bitcoin, but it has the opportunity to lead development and help make the technology mainstream. To do so, however, it may also have to fundamentally rethink its approach to privacy, which is an inherent part of Bitcoin but largely absent from the way Google currently operates thanks to its widespread data-gathering operation.

Online jungle. Shutterstock

2. Creating a safer online world

Google’s Project Vault will give us a digital safe in which to securely store our smartphone’s personal data and messages. Another useful gadget, no doubt. But instead of developing security devices and making gadgets less stealable, I’d like to see Google support us in becoming more secure in ourselves. Existing innovations came about as a reaction to the insecurities of a hacked world. But there are opportunities not only for creating new digital safes and padlocks, alarms and security guards, but also to begin an exploration of how to create preventive and naturally safe virtual and physical environments. These environments would be less about protection and defence and more about assurance and trust.

The new windows Shutterstock

3. Making technology less intrusive

Smartphones are constantly diverting our attention from the real world. Integrating technology more seamlessly into our lives could free us from their grip. Wearable technology and smart clothing could be one way of doing this, but better would be technologies that rely on and develop our tactile relationships with the world and each other.
This may well involve finally dispensing with the “screen” and the gadget as the required focus of our attention. A big question is how Google can create technology that doesn’t require us to “look”, instead of having us squint at screens of different sizes, flashing us into trance states and harming our eyesight. Some experiments in less noticeable technology may involve an initial intrusion, for example digital implants for communication, enhancing our senses or even curing physical conditions. But it is not guaranteed people will want to become cyborgs. A big opportunity is to create technologies that arise and pass away as needed, that are temporary, emergent and that enter our lives when we truly need them and leave when we don’t.

Flying turbines Makani/Google

4. Changing the way we produce energy

Energy is one of the biggest challenges for the whole planet. What if Google turned its weighty innovation might towards generating truly clean energy? Others in Silicon Valley have already started making inroads into the energy sector – see this gadget that allows consumers to access solar energy through smart tech, without buying expensive panels. Electric vehicle and battery technology, such as Tesla is making, also continues to grow and innovate. But country-sized corporations such as Google could do even more (perhaps they already are, behind closed doors). There are some crazy-sounding alternative forms of energy emerging that might just work. Solar roads, sewage waste and even high-altitude wind energy might benefit from some Google kickstart resource (the latter just has). OK, Google! While you are up high in the sky installing wifi balloons, why not harness some free energy for us all?

Paul Levy is Senior Researcher in Innovation Management at University of Brighton

This article was originally published on The Conversation. Read the original article.
“This program had absolutely nothing to do with race… but multi-variable equations.” That’s what Brett Goldstein, a former policeman for the Chicago Police Department (CPD) and current Urban Science Fellow at the University of Chicago’s School for Public Policy, said about a predictive policing algorithm he deployed at the CPD in 2010. His algorithm tells police where to look for criminals based on where people have been arrested previously. It’s a “heat map” of Chicago, and the CPD claims it helps them allocate resources more effectively.

Chicago police also recently collaborated with Miles Wernick, a professor of electrical engineering at Illinois Institute of Technology, to algorithmically generate a “heat list” of 400 individuals it claims have the highest chance of committing a violent crime. In response to criticism, Wernick said the algorithm does not use “any racial, neighborhood, or other such information” and that the approach is “unbiased” and “quantitative.”

By deferring decisions to poorly understood algorithms, industry professionals effectively shed accountability for any negative effects of their code.

But do these algorithms discriminate, treating low-income and black neighborhoods and their inhabitants unfairly? It’s the kind of question many researchers are starting to ask as more and more industries use algorithms to make decisions. It’s true that an algorithm itself is quantitative – it boils down to a sequence of arithmetic steps for solving a problem. The danger is that these algorithms, which are trained on data produced by people, may reflect the biases in that data, perpetuating structural racism and negative biases about minority groups.

There are a lot of challenges to figuring out whether an algorithm embodies bias. First and foremost, many practitioners and “computer experts” still don’t publicly admit that algorithms can easily discriminate. More and more evidence supports that not only is this possible, but it’s happening already.
The law is unclear on the legality of biased algorithms, and even algorithms researchers don’t precisely understand what it means for an algorithm to discriminate.

Is bias baked in? Justin Ruckman, CC BY

Being quantitative doesn’t protect against bias

Both Goldstein and Wernick claim their algorithms are fair by appealing to two things. First, the algorithms aren’t explicitly fed protected characteristics such as race or neighborhood as an attribute. Second, they say the algorithms aren’t biased because they’re “quantitative.” Their argument is an appeal to abstraction: math isn’t human, and so the use of math can’t be immoral. Sadly, Goldstein and Wernick are repeating a common misconception about data mining, and mathematics in general, when it’s applied to social problems. The entire purpose of data mining is to discover hidden correlations. So if race is disproportionately (but not explicitly) represented in the data fed to a data-mining algorithm, the algorithm can infer race and use race indirectly to make an ultimate decision.

Here’s a simple example of the way algorithms can result in a biased outcome based on what they learn from the people who use them. Look at how Google search suggests finishing a query that starts with the phrase “transgenders are”:

Taken from Google.com on 2015-08-10.

Autocomplete features are generally a tally: count up all the searches you’ve seen and display the most common completions of a given partial query. While most algorithms might be neutral on their face, they’re designed to find trends in the data they’re fed. Carelessly trusting an algorithm allows dominant trends to cause harmful discrimination, or at least have distasteful results.

Beyond biased data, such as Google autocompletes, there are other pitfalls, too. Moritz Hardt, a researcher at Google, describes what he calls the sample size disparity. The idea is as follows.
If you want to predict, say, whether an individual will click on an ad, most algorithms optimize to reduce error based on the previous activity of users. But if a small fraction of users consists of a racial minority that tends to behave in a different way from the majority, the algorithm may decide it’s better to be wrong for all the minority users and lump them in the “error” category in order to be more accurate on the majority. So an algorithm with 85% accuracy on US participants could err on the entire black sub-population and still seem very good.

Hardt goes on to say that it’s hard to determine why data points are erroneously classified. Algorithms rarely come equipped with an explanation for why they behave the way they do, and the easy (and dangerous) course of action is not to ask questions.

Those smiles might not be so broad if they realized they’d be treated differently by the algorithm. Men image via www.shutterstock.com

Extent of the problem

While researchers clearly understand the theoretical dangers of algorithmic discrimination, it’s difficult to cleanly measure the scope of the issue in practice. No company or public institution is willing to publicize its data and algorithms for fear of being labeled racist or sexist, or maybe worse, having a great algorithm stolen by a competitor. Even when the Chicago Police Department was hit with a Freedom of Information Act request, it did not release its algorithms or heat list, claiming a credible threat to police officers and the people on the list. This makes it difficult for researchers to identify problems and potentially provide solutions.

Legal hurdles

Existing discrimination law in the United States isn’t helping. At best, it’s unclear on how it applies to algorithms; at worst, it’s a mess.
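Before turning to the legal questions, the arithmetic behind Hardt's sample size disparity is worth spelling out: a model can fail every member of a small subgroup and still report a headline accuracy that looks excellent. A toy calculation (the group sizes are invented to match the 85% figure above):

```python
# 1,000 users: 850 in the majority group, 150 in a minority group whose
# behaviour the model gets entirely wrong. Being right for every
# majority user alone still yields an impressive headline number.
majority_correct = 850   # all majority predictions correct
minority_correct = 0     # every minority prediction wrong
total = 1000

accuracy = (majority_correct + minority_correct) / total
minority_accuracy = minority_correct / 150

print(f"overall accuracy: {accuracy:.0%}")    # overall accuracy: 85%
print(f"minority accuracy: {minority_accuracy:.0%}")  # minority accuracy: 0%
```

The overall metric hides the subgroup failure entirely, which is why evaluating accuracy per group, not just in aggregate, matters.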
Solon Barocas, a postdoc at Princeton, and Andrew Selbst, a law clerk for the Third Circuit US Court of Appeals, have argued that US hiring law fails to address claims about discriminatory algorithms in hiring. The crux of the argument is called the “business necessity” defense, in which the employer argues that a practice that has a discriminatory effect is justified by being directly related to job performance. According to Barocas and Selbst, if a company algorithmically decides whom to hire, and that algorithm is blatantly racist but even mildly successful at predicting job performance, this would count as business necessity – and not as illegal discrimination. In other words, the law seems to support using biased algorithms.

What is fairness?

Maybe an even deeper problem is that nobody has agreed on what it means for an algorithm to be fair in the first place. Algorithms are mathematical objects, and mathematics is far more precise than law. We can’t hope to design fair algorithms without the ability to precisely demonstrate fairness mathematically. A good mathematical definition of fairness will model biased decision-making in any setting and for any subgroup, not just hiring bias or gender bias. And fairness seems to have two conflicting aspects when applied to a population versus an individual.

For example, say there’s a pool of applicants to fill 10 jobs, and an algorithm decides to hire candidates completely at random. From a population-wide perspective, this is as fair as possible: all races, genders and orientations are equally likely to be selected. But from an individual level, it’s as unfair as possible, because an extremely talented individual is unlikely to be chosen despite their qualifications. On the other hand, hiring based only on qualifications reinforces existing hiring gaps. Nobody knows if these two concepts are inherently at odds, or whether there is a way to define fairness that reasonably captures both.
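The tension between the two notions can be seen in a small simulation; the scores and group labels below are randomly generated purely for illustration:

```python
import random

random.seed(0)
# 100 applicants, each with a qualification score and one of two group labels.
applicants = [{"id": i, "score": random.random(), "group": i % 2}
              for i in range(100)]

# Population-level view: hiring 10 completely at random gives every
# group a selection probability exactly proportional to its size.
random_hires = random.sample(applicants, 10)

# Individual-level view: hiring the 10 highest scorers always rewards
# the most qualified person, whatever the group balance turns out to be.
merit_hires = sorted(applicants, key=lambda a: a["score"], reverse=True)[:10]

top = max(applicants, key=lambda a: a["score"])
print(top in merit_hires)    # the most talented applicant is always chosen
print(top in random_hires)   # under random hiring, usually not
```

Random selection is maximally fair to groups and maximally indifferent to individuals; merit ranking is the reverse. Any proposed definition of algorithmic fairness has to take a position somewhere between these two poles.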
Cynthia Dwork, a Distinguished Scientist at Microsoft Research, and her colleagues have been studying the relationship between the two, but even Dwork admits they have just scratched the surface.

To get rid of bias, we need to redesign algorithms with a fresh perspective. Thomas Mukoya/Reuters

Get companies and researchers on the same page

There are immense gaps on all sides of the algorithmic fairness issue. When a panel of experts at this year’s Workshop on Fairness, Accountability, and Transparency in Machine Learning was asked what the low-hanging fruit was, they struggled to find an answer. My opinion is that if we want the greatest progress for the least amount of work, then businesses should start sharing their data with researchers. Even with proposed “fair” algorithms starting to appear in the literature, without well-understood benchmarks we can’t hope to evaluate them fairly.

Jeremy Kun is a PhD student in Mathematics at University of Illinois at Chicago. This article was originally published on The Conversation. Read the original article.
Three lawyers and a police commissioner from the US, France, England and Spain have used an opinion piece in the NY Times to hold Apple and Google to account for providing their customers with unbreakable encryption of their mobile devices. Cyrus R. Vance Jr. (Manhattan District Attorney), François Molins (Paris Chief Prosecutor), Adrian Leppard (Commissioner of the City of London Police) and Javier Zaragoza (Chief Prosecutor of the High Court of Spain) have argued that the inability of law enforcement to access a user’s phone has, in their view, been directly responsible for disrupting investigations. They cite that in the 9 months to June of this year, 74 iPhones running iOS 8 could not be accessed by investigators in Manhattan, and that this affected investigations that included “the attempted murder of three individuals, the repeated sexual abuse of a child, a continuing sex trafficking ring and numerous assaults and robberies”. They claim that encrypting the phones provides no security for the customer against the type of wide-net surveillance carried out by the NSA, nor would it prevent “institutional data breaches or malware”.

How the authors of this piece could be so confident that the investigations they give as examples would have had different outcomes with access to the phones is not clear. What is abundantly clear, however, is that the encryption of these devices did exactly what it was supposed to do: it stopped someone other than the owner of the phone from gaining access to its contents. The authors didn’t mention the fact that this alone prevents criminals from accessing the content of the phone if it is stolen or lost. This is no small benefit, given that 3 million phones were stolen in the US in 2013 and that the majority of people would go to extreme lengths to recover the data on their lost devices. The opinion piece goes on to declare the benefits “marginal” and to claim that the consequence for law enforcement is the inability to guarantee the “safety of our communities”.
The lobbying by senior law enforcement officials in the press against the use of encryption is becoming a common occurrence. The NY Times piece follows FBI Director James Comey’s declaration that unstoppable terrorism was the consequence of allowing encryption to be used by consumers. All of the rhetoric used in these arguments makes so many leaps of faith as to undermine any credibility in their claims. The suggestion, for example, that criminals or terrorists would not seek to cover their tracks in their phone use, or exercise care over what incriminating evidence they kept on their phones, is not realistic. The further suggestion that any evidence the phone itself revealed would be critical in an investigation is likewise suspect.

Remember that police still have access to phone records, and also to all data communication carried out through the device. Both France and the UK have enacted extensive data retention laws. The police can also continue to get access to cloud services used by the phone’s owner, to social media and location data, and to a host of other information leaked through the use of the device that is not protected in the same way as the device itself. Perhaps this is what is really being fought over: the obvious next step in consumer privacy, which is putting the encryption of all cloud-based resources in the hands of the consumer, so that companies like Google, Apple and Microsoft are unable to hand over information to government security and law enforcement agencies.

The Internet Architecture Board, a committee of industry, academic and government specialists concerned with the functioning of the Internet, has declared that they “strongly encourage developers to include encryption in their implementations, and to make them encrypted by default.” Ultimately, it will be society as a whole that determines the appropriate balance of privacy against the wishes of governments and agencies to surveil at will.
The right to privacy is stated in the Universal Declaration of Human Rights, and it will take more than the alleged inconvenience it causes legal officials for society to give up that right.

David Glance is Director of UWA Centre for Software Practice at University of Western Australia

This article was originally published on The Conversation. Read the original article.
In today’s mobile media environment, an incredible amount of information is available to every one of us, every minute of every day. With our cell phones close by, we can easily search for answers to trivia questions, look up word definitions or find the perfect recipe for the confetti eggplant bought at the farmers’ market. When traveling, we have instant access to the conversion rate between the euro and the dollar and can map directions to any location. And then there is all the personalized information posted by our Facebook friends. So, how do we keep up with and understand the wide array of information? How do we integrate this into our lives as we participate in a connected world? And how do we make meaningful additions to these spaces as originators of information in the online venues that matter to us?

As researchers of library and information science, we use the term metaliteracy as a way to look at literacy in the social media age. Previously, the term metaliteracy was used mostly in connection with literacy studies. We expand the idea further in our book, Metaliteracy: Reinventing Information Literacy to Empower Learners, using it as a way to recast information literacy for reflective learning with social media and emerging technologies. So, what exactly is metaliteracy, as we define it?

Metaliterate mindset

To understand it, let’s consider some common web-based situations that we encounter daily. When browsing the web or scrolling Facebook, you may have noticed that the ads that appear often align very closely with searches you’ve performed previously. For instance, after searching for consumer products such as a new sofa, you probably encountered the exact same products and stores you originally sought out. At times, this might be just what you want. But after a while, it might start to feel a bit intrusive. Yes, you can adjust your ad settings to increase the chances that relevant advertisements appear only when you are on Google sites such as YouTube.
But did you also know that you can opt out of this feature? Here is where metaliteracy comes into play. A metaliterate learner would always dig deeper into the search process, ask good questions about sources of information, consider privacy and ethical issues, and reflect on the overall experience, while adapting to new technologies and platforms.

Filtering in a connected world

There is more going on here than we might think we know. For instance, did you know that often the information we see online is being filtered for us, by someone else? Google has been personalizing your search results since 2005 if you were signed in and had your web history enabled. If you were being cautious and didn’t sign in, starting in 2009 Google began using 180 days of your previous search activity to accomplish the same thing. Google might call it personalizing, but others see it as constricting.

Yes, but how many of us use other search engines? Patrick Barry, CC BY-SA

Information filtering, or “filter bubbles,” as author and cofounder of Upworthy Eli Pariser calls it in his TED Talk, can circumscribe the information we see when we conduct those searches. Filtering results in isolated information ecosystems of our own making.

What about other search engines?

If we are willing to break away from the convenience of Google, we could use other search services. DuckDuckGo and Startpage are just two of several search engines that provide more privacy than some of the big names. For instance, DuckDuckGo does not engage in “search leakage,” as that firm calls it. It notes that other search engines save not only individual searches, but also your search history:

“Also, note that with this information your searches can be tied together. This means someone can see everything you’ve been searching, not just one isolated search. You can usually find out a lot about a person from their search history.”
But worse, search engines may release searches without adequately anonymizing the information, or that information may be hacked. Startpage allows you to funnel your search in a way that obtains Google results without your personally identifiable information traveling along with the query. But how many of us opt to use a search engine other than Google? Google is still the dominant search engine worldwide.

Filtering information

So, then, are we weaving our own webs without carefully thinking about the many implications of doing so? Look at how we selectively create and share our experiences on the fly, editing and filtering digital information along the way, and making choices about permissions to view and to share. For instance, imagine the millions of selfies that reflect our individual personas while being shared within a larger social mosaic of interconnected audiences. Sometimes we may not even be aware of who can access our content or how it is distributed beyond our immediate circle of friends. Consider our focused concentration on texting while in large crowds, ignoring chance encounters with others or missing the random scenery of everyday experience. Information filtering is ongoing in all these contexts and is both internally and externally constructed.

Empowered contributors

What does metaliteracy do? Metaliteracy prepares us to ask critical questions about our searches and the technologies we use to seek answers and to communicate with others. We do not just accept the authority of information because it comes from an established news organization, a celebrity, a friend, or a friend of a friend. Metaliteracy encourages reflection on the circumstances under which the information was produced. It prepares us to ask whether the materials came from an individual or an organization and to determine the reason for posting or publishing it.
As part of this process, the metaliterate learner will seek to verify the source and ask questions about how the information is presented and in what format. Metaliterate individuals gain insights about open environments and how to share their knowledge in these spaces. For instance, they are well aware of the importance of Creative Commons licenses for determining what information can be reused freely, for making such content openly available for others' purposes, and for producing their own content. They also understand the importance of peer review and peer communities for generating and editing content for sites such as Wikipedia, open textbooks and other forms of Open Educational Resources (OERs). The truth is that we can all be metaliterate learners – reflective and empowered, asking perceptive questions, thinking about what and how we learn, while sharing our content and insights as we make contributions to society.

Trudi Jacobson is Distinguished Librarian at University at Albany, State University of New York. Thomas P. Mackey is Vice Provost for Academic Programs at SUNY Empire State College.

This article was originally published on The Conversation. Read the original article.
Twitter has been in the news recently, for all the wrong reasons. Business media report that Twitter shareholders are disappointed with the company’s latest results; this follows recent turmoil in the company’s leadership, which saw the departure of controversial CEO Dick Costolo and the (temporary) return of co-founder Jack Dorsey until a permanent replacement is found. All this has served to feed rumours that Google, having recently called time on its own underperforming social network Google+, might be interested in acquiring Twitter.

From one perspective, this would clearly make sense – social media are now a key driver of Web traffic and a potentially important advertising market, and Google will not want to remain disconnected from this space for long. On the other hand, though, given its chequered history with the now barely remembered Google Buzz as well as its major effort Google+, Twitter users (and the third-party companies that serve this userbase) may well be concerned about what a Google acquisition of the platform might mean for them.

I had the opportunity to explore these questions in some detail in an extended interview with ABC Radio’s Tim Cox last week. In a wide-ranging discussion, we reviewed the issues troubling Google+ and Twitter, and the difficulties facing any player seeking to establish a new social media platform alongside global market leader Facebook.

Let us take this conversation further: what if Google did buy Twitter? From my point of view, this could turn out to be a positive move, if Google treats the platform appropriately (as it did, arguably, with past acquisitions such as Blogger, YouTube and Google Maps). It has become very obvious over the past months that Twitter’s stock market listing has been a curse at least as much as a blessing: while it has raised substantial new capital, of course, it has also exposed the company to the expectations of shareholders who seem to fundamentally misunderstand what Twitter is or can be.
As a platform, Twitter is not and will never be a competitor to Facebook, whatever its shareholders seem to think. Both might be classed under the overall rubric of “social media”, but any direct comparison constitutes a category error: the appeal of a strong-ties, small-world network platform like Facebook, where we tend to network predominantly with family and friends, is necessarily fundamentally different from that of a weak-ties, large-world space like Twitter, where we can follow – and attempt to strike up conversations with – celebrities, politicians, and other users outside of our immediate networks. That’s a very different kind of social network, with its own unique uses, and it is futile to hope that Twitter will eventually attract the same number of users, or the same user activity patterns, as Facebook. Worse still, trying to reshape Twitter in Facebook’s image by force would almost inevitably kill off the platform.

If Google understands this, and treats Twitter appropriately (which probably includes accepting it as a loss leader for the time being), this could well turn the platform’s fortunes around. Twitter’s recognised strengths are as a flat, public, and open network that excels especially in live contexts; Twitter is the place where most recent breaking news stories first broke, and a space where users gather as a temporary public and community to collectively participate in shared experiences from the World Cup to Eurovision. Beyond any marketing hype, it genuinely serves as the pulse of the planet in a great many contexts. This live insight into what news stories and other information are currently hot (and thus should be served as search results, too) may well be valuable enough for Google to fork out a few billion, even if there still doesn’t seem to be a workable model for generating significant direct advertising revenue from the platform.
But whoever takes on Twitter, one of the first things the new CEO will need to do is fundamentally rebuild Twitter’s relationship with those on whom its successes have historically depended most: the flotilla of third-party developers and researchers that surrounds the Twitter mothership. As Jean Burgess and I have documented in our contribution to the forthcoming collection Digital Methods for Social Science, those developers – and the early adopters and lead users whom they have served – have made the platform what it is: they developed powerful Twitter clients and tools, and laid the groundwork for the social media analytics approaches that have become crucial for making sense of trends on Twitter and elsewhere.

Sadly, though, especially under Dick Costolo, Twitter’s relationship with these crucial allies in the promotion of Twitter as a platform and a community soured significantly: abrupt and radical changes to the terms of service of the Twitter API (which govern what data companies and their tools can gain access to), made in pursuit of more revenue, undermined this crucial third-party ecosystem and stymied further innovation. And if anything, the handful of exceptions to this new, more restrictive régime – such as the Twitter Data Grants for researchers, which supported a total of only six out of 1,300 proposed projects – caused further offence rather than restoring goodwill.

Absent any major new investment, a Twitter relying mainly on the support of its shareholders seems unlikely to change tack in this way: it will continue to chase revenue by attempting to commercialise its data, and in the process also continue to alienate the crucial third-party developer community. This is a path of diminishing returns: the data are valuable only as long as there are popular and meaningful applications for Twitter as a platform, but those applications have historically been created by the third-party developers and the power users they support.
Freed from the short-term, unrealistic demands of the stock market through an acquisition by Google (or another cashed-up investor), on the other hand, Twitter could dial back its desperate efforts to commercialise its APIs and the data they provide, and return to its original, more permissive data access régime in order to nurture and support new efforts at research and development. Such a shift in policy could well be the shot in the arm Twitter needs to ensure its longer-term survival – but it depends on the intervention of a new benefactor. Is Google ready to play – or is it still too disheartened by its past attempts to enter the social media market?

Axel Bruns is Professor, Creative Industries at Queensland University of Technology. This article was originally published on The Conversation. Read the original article.
We tend to think of our healthcare sector as a leader in the development and use of advanced medical technology and biotechnology, such as expensive imaging machines or devices that we implant into patients. But in many aspects of conducting the business of healthcare, our healthcare system is still in a pre-digital era. For example, healthcare may be the last sector where significant amounts of communication are still done via fax and regular post.

This is not to say that significant changes are not happening. Radiology increasingly uses digital technology, but the interpretation of these images is still manual. Electronic health and medical records are also being introduced widely, but there is little communication between the systems that collect them. The digital revolution in healthcare that is currently slowly unfolding will use data and technology to improve the healthcare of patients. It will also increase safety and quality, and improve efficiency in the healthcare system.

The eyes have it… remotely

One example of how technology can be used to deliver better healthcare is a recent trial by CSIRO and our partners that provided screening for eye diseases among people in remote parts of Western Australia and Queensland. Using the nbn’s satellite broadband service, we screened more than 1,200 people in their communities for diseases such as diabetic retinopathy. This disease often causes irreversible blindness, and it affects the Indigenous population at nearly four times the rate of the non-Indigenous population.

Captured: a typical high-resolution image of a patient’s retina. CSIRO, CC BY-NC

Local health workers were trained to capture high-resolution images of a patient’s retina with a low-cost retinal camera. These images were stored and then uploaded over the NBN satellite to ophthalmologists in Brisbane and Perth. The screening program identified 68 patients who were at high risk of going blind, including those with macular edema.
In most cases these patients received treatment locally. However, some patients needed transfer to major hospitals for immediate treatment. Once patients were identified as being at risk of significant eye disease, they were provided with care plans that involved local follow-up consultations and regular screening through the tele-eye care program. For diabetic patients this included advice on controlling their diabetes, which improves their overall health as well as reducing the risk of blindness.

We have the technology

Overall, the trial showed the effectiveness of providing a “store and forward” tele-ophthalmology service using satellite broadband. These types of services have previously been held back by unreliable broadband services and the lack of digital systems in our health services to interact with them. Reliable broadband connectivity, together with increased use of digital systems by health services, means that these methods of health service delivery can now become the normal way healthcare is provided.

But for these tele-enabled models of care to really take off, patient data must be shared between providers. At the moment, different healthcare providers – GPs, specialist doctors and emergency departments at local hospitals – all separately collect information about the same patients. This means that the services a patient receives are generally uncoordinated. With the increase in chronic diseases, such as diabetes and eye disease, coordinated care will lead to better health outcomes.

For providers to share data, their computer systems need to be able to send and receive data, and to make sure that the data is added to the correct patient’s electronic record. This is where the type of algorithms that power search engines such as Google – semantic web and information retrieval technologies – can be tailored for healthcare systems. Shared properly, the data can be used to make sure that patients receive appropriate services.
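The record-matching step mentioned above can be illustrated with a short sketch. To be clear, this is not the algorithm used by CSIRO or any real e-health system; the field names, weights and threshold below are all hypothetical, chosen only to show the general idea of scoring a pair of records before data is merged into a patient’s electronic record.

```python
# A toy record-linkage sketch. All field names and weights are
# hypothetical; real systems use probabilistic matching over many
# more fields, plus standardised patient identifiers where available.

def match_score(rec_a, rec_b):
    """Score how strongly two records appear to describe the same patient."""
    score = 0.0
    if rec_a["dob"] == rec_b["dob"]:
        score += 0.4  # date-of-birth agreement weighted most heavily
    if rec_a["surname"].lower() == rec_b["surname"].lower():
        score += 0.3
    if rec_a["given_name"][:1].lower() == rec_b["given_name"][:1].lower():
        score += 0.1  # first initial only: "Jane" vs "J" still agree
    if rec_a["postcode"] == rec_b["postcode"]:
        score += 0.2
    return score

def same_patient(rec_a, rec_b, threshold=0.8):
    """Link two records only when the evidence clears a threshold."""
    return match_score(rec_a, rec_b) >= threshold

# A GP record and a hospital record for (probably) the same person:
gp_record = {"surname": "Smith", "given_name": "Jane",
             "dob": "1975-03-02", "postcode": "4000"}
hospital_record = {"surname": "SMITH", "given_name": "J",
                   "dob": "1975-03-02", "postcode": "4000"}

print(same_patient(gp_record, hospital_record))  # prints True
```

Real patient-matching systems rely on far more robust probabilistic methods, but the underlying principle – scoring field agreement and linking records only above a confidence threshold – is the same.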
Sharing this data will also mean that each healthcare provider holds a bigger volume of data about a patient. This will require computers to do more to analyse the data and alert patients, clinicians and healthcare providers when follow-up action is needed.

More IT jobs needed in healthcare

The increased use of digital technologies will not only improve healthcare; it will also significantly boost the number of IT professionals, including data scientists, needed to work in the sector. Big data analytics will be required to analyse the large volumes of different types of data that are being collected at an increasing rate.

But it is not just about applying these new technologies in healthcare. There is also a need to work with clinicians and health service executives to understand what data is – or could be – collected. This may lead to a new way of providing clinical care, a new health service, or even make existing processes more efficient. For data analysts and IT professionals working in healthcare, the opportunities to make a difference to patients are almost boundless.

David Hansen is CEO, Australian e-Health Research Centre at CSIRO. This article was originally published on The Conversation. Read the original article.
From autonomous vehicles and the rapid rise of Uber to the global diffusion of bike-sharing schemes, transport is changing. Developments in information technology, transport policy and behaviour by urban populations may well be causing a wholesale shift away from conventional cars to collective, automated and low-carbon transport. Yet there are still many uncertainties in technology development, finance and trends in user practices, and expectations about the scale of these changes may well be inflated.

Perhaps the most significant development is “peak car” use – the stalled growth or modest decline in car ownership and use since around 1990 across the developed world. As well as economic reasons and the returning popularity of city living, this seems to be driven in part by a move away from pro-car planning. Metropolitan governments in particular are increasingly reallocating road space away from private cars and concentrating office and housing developments around public transport stations. They are even supporting a wide range of innovations in local transport, including self-driving pod cars called up by a smartphone app. All these initiatives aim to reinvent “old” transport systems – metro, tram and cycling – as efficient, fashionable and healthy, enabling both economic growth and a better quality of life.

Public transport problems

However, public transport still faces significant challenges. Research consistently shows that satisfaction with trip-making is lower on bus and rail than on other forms of transport. The industrial-era logic of only offering services at particular stops or stations at specific times sits uncomfortably with the changing rhythms of work, shopping, care-giving and leisure in post-industrial societies. These service provision problems are particularly acute in suburbs (and of course rural areas), where the flexibility afforded by private cars continues to be the norm.
Yet, even in the densest parts of cities, public transport only meets everyone’s needs when there are more flexible options as well. This is why greater public transport use is linked to, and to some extent triggers, increased use of cycling and, more recently, smartphone-enabled taxi services. These forms of transport are available (almost) everywhere at all times – and are therefore more compatible with the individualised lifestyles of people accustomed to the convenience that private car use epitomises. But even bike and car-sharing schemes with fixed docking stations and parking bays suffer from some of the same limitations as public transport does. The future may well be brighter for smartphone-dependent “free-floating” schemes, whereby cars can be picked up and left at any location within a designated zone that stretches across a city or parts thereof.

Pod to the future. Department for Transport/flickr, CC BY-NC-ND

There are also big obstacles to a public transport revolution in the form of entrenched governance patterns and vested interests. Past planning decisions, in particular, constrain current and future changes in transport systems. This is because the construction of road infrastructure, sprawling suburbs, car-dependent retail/leisure complexes and mono-functional business areas since the 1950s is largely irreversible, at least for the coming decades.

Industry fightback

The car industry remains powerful and does not sit still. In many countries, car manufacturing continues to be important to the national economy and can therefore count on considerable support from local, national and supranational (EU) governments. This is exemplified by the UK government’s Office for Low Emission Vehicles, which was set up to stimulate the uptake of electric and other low-carbon vehicles. Car manufacturers may now be experiencing competition from powerful technology companies such as Google and Apple, but this is catalysing their own development of innovations.
The Google driverless car may be the most famous but the first such vehicles from conventional manufacturers are expected to hit the market by 2017-2018. Many hurdles still need to be overcome. The technology needs substantial refining, major issues around insurance and liability need to be resolved, it is not clear how adaptations to road infrastructure will be financed, and public opinion is divided. Based on experiences with electric and fuel cell cars in recent decades, current expectations about commercialisation and consumer uptake are (vastly) over-optimistic. Unexpected and unforeseeable events may radically reshape current development trajectories, but there are good reasons to expect that transport systems in 30 years will not be drastically different from today. Tim Schwanen is Associate Professor in Transport Studies and Director of the Transport Studies Unit (from September 2015) at University of Oxford. This article was originally published on The Conversation. Read the original article.
UK communications regulator Ofcom has released a report that gives a fascinating snapshot of digital society in the UK. It highlights the dominance of mobile, and the centrality of social media in social interactions and relationships. The change has been brought about not by improvements in fixed broadband, but by the availability of larger, more capable phones and faster 4G mobile networks. Phones and 4G are in turn facilitating communication through a variety of channels, especially social media.

Bigger phones allow people to do more

In terms of the importance of mobile, 33% of UK residents now view their smartphone as the most important device for connecting to the Internet, compared to 30% who chose their laptop. This switch in preference has come about because of the general increase in the size of phones. The release of Apple’s iPhone 6 and 6 Plus in response to the popularity of Android phones of the same size has helped cement the larger form factor as a standard. People can now comfortably carry out many of the tasks that would normally have been reserved for a laptop, PC or tablet.

4G is the other key enabler of the move to mobile

The second reason has been the increase in speed of the average smartphone connection. 45% of UK smartphone users have access to 4G networks, a 28% increase on the previous year. The faster speeds have not only resulted in greater use of mobile data generally, but have also shifted what users will do with their phones. 4G users are more likely to use their phones to access audio-visual content (57% of 4G users compared to 40% of non-4G users). They are also more likely to use their phones to make online purchases and use online banking.

Faster fixed broadband plays a smaller part

What is interesting is that the changes brought about by the increased use of smartphones have had more impact than the increase in speeds of fixed broadband services to the home. 83% of UK premises are able to receive broadband speeds of 30 Mbit/s or higher.
30% of homes have connected to broadband at these higher speeds. Mobile 4G users were less likely to use their home wireless than those not on 4G, showing a general trend towards “cutting the cord” even in the area of Internet access.

Changing communication and social media

UK Internet users believe that technology has changed the way they communicate, and that these new forms of communication have made life easier. Traditional forms of digital communication such as email and text messaging are still dominant, but 62% of online adults use social media and 57% use instant messaging to communicate regularly with family and friends. Technologies such as Skype, FaceTime and Google Hangouts are also used by 34% of adults. In terms of social media use, Facebook is by far the dominant platform, with 72% of adults having a social media profile and 97% of those having one on Facebook. Although teenagers are likely to use other social media platforms, 48% of social media users use Facebook exclusively. People are also spending greater amounts of time on Facebook than on any other service. In March 2015, users spent 51 billion minutes on Facebook’s website and apps, compared to 34 billion on Google’s. YouTube was also watched by more people via mobile devices than on desktops or laptops.

Change, but not in productivity

Although digital technologies have brought about a major change in society in the UK, this hasn’t been reflected in any change in productivity in the UK economy. The UK continues to rate behind France, Germany, the US and even Italy in terms of worker productivity. Surveys such as this underscore several important points. The first is that investment in fixed broadband infrastructure is not necessarily as important as investment in universal high-speed wireless access in terms of its impact on society. Second, although we may see radical changes in social norms through the use of digital technologies, it won’t show up in increased productivity.
The last point has to be qualified, however. It may well be that existing businesses do not show any improvement in productivity, but new forms of industry and business enabled by a mobile economy may well bring about radical changes in productivity. Uber, Airbnb and other businesses in the so-called “gig economy” threaten to disrupt established industries, and this will only be possible through the use of mobile phones and high-speed wireless.

David Glance is Director of UWA Centre for Software Practice at University of Western Australia. This article was originally published on The Conversation. Read the original article.
Telstra last week announced that it will launch Telstra TV in September. This could be the “all-in-one” video streaming service many Australians have craved, bundling together the three leading video-on-demand (VoD) services: Netflix, Stan and Presto. However, it also puts Telstra in an odd position, straddling multiple services, some of which compete with other products Telstra is intimately involved in, such as Foxtel. It also underscores how the introduction of VoD services has disrupted and fragmented the television market in Australia. Even ISPs are now starting to play a role in delivering content to the biggest screen in the house. So what will the future of television look like in Australia?

All-in-one

Telstra TV will run on the Roku 2 device, which is comparable in function to Google Chromecast and Apple TV. The Roku 2 already runs hundreds of international apps, although it is unclear how many will be available when the device launches in Australia. An attractive feature of the new service could be that it will launch with access to Netflix and Presto, with Stan to follow. Telstra has said it is trying to negotiate a bundled price for all three services for less than A$30 a month. The announcement by Telstra raises questions about where the company sees its future and involvement, not only as an ISP but also as a media distributor.

The Telstra TV device, based on the Roku 2, and remote. Telstra

Telstra is already heavily involved in a number of areas of the Australian media, across broadcast and streaming. The company has an even 50% share with News Corp Australia in Foxtel, which has Foxtel Play, arguably a competitor to Netflix, Stan and Presto. Foxtel itself is involved in a joint venture in Presto with Seven West Media. Foxtel also recently purchased a 15% share in the Ten Network, and last year completed the joint production of Gogglebox with the network, which was broadcast on pay television with a one-day delay to free-to-air (FTA).
In addition to Telstra’s various media company associations, it also has services that could be argued to compete with its new Telstra TV service. Telstra’s current T-Box could be seen as offering many digital services similar to Telstra TV. Despite this, there is no suggestion that it will be discontinued when the new service is launched. One key reason could be that T-Box provides access to, and recording of, FTA broadcast television, similar to its competitor Fetch TV, a service linked with Optus and iiNet. The Roku 2 does not allow FTA viewing or recording, relying solely on internet and app entertainment. As a Telstra spokesperson said: “We will not be positioning this as a substitution for Foxtel at all. This is very much for non-pay TV customers.” Despite this claim, Damien Tampling from Deloitte said there is “potential the move would cannibalise Foxtel customers”. The Roku service does provide access to HBO Go, which provided immediate access to the most recent season of Game of Thrones, a series high on the piracy list for Australians. If HBO Go were to become available in Australia, this would impact Foxtel, which relies heavily on exclusive programs such as Game of Thrones.

Shake up

This move by Telstra also raises questions about the future of television and VoD in Australia, such as: who will be the big media players in the future? Will the future of these services rest with the current traditional broadcasters, free-to-air and pay TV? Or will ISPs play a larger role in this space? Telstra has the largest number of individual customers subscribed to Netflix of all Australian ISPs, but the lowest percentage: only 5.2%. This is far less than the leader, iiNet, at 16.8%. Stan and Presto are yet to see strong uptake since their launch at the beginning of this year. Recent figures show Netflix had three times more users than Presto, Stan, Quickflix and Foxtel Play combined.
Telstra’s spokesperson said live sports will be the reason for customers to stay with Foxtel and free-to-air TV. But even sports broadcasting is changing, with new players such as YouTube entering the field, and sporting organisations becoming their own broadcasters.

What’s next on TV?

There is no doubt that 2015 will be an interesting year for TV. We will see how VoD might force old players to adapt to the changing media landscape. Seven and Nine are already involved with VoD services, with only Ten yet to make a move in this space, although the recent purchase by Foxtel could change the network’s direction. There will also be interest around any new players that may appear as the race continues to establish some structured approach to a changing media distribution environment. In this we might see ISPs take a greater role as media outlets.

Marc C-Scott is Lecturer in Digital Media at Victoria University. This article was originally published on The Conversation. Read the original article.
Windows 10, it seems, is proving a hit with both the public and the technology press after its release last week. After two days, it had been installed on 67 million PCs. Of course, sceptics may argue that this may simply be a reflection of how much people disliked Windows 8, and of the fact that the upgrade was free. For others, though, it is the very fact that the upgrade is free that has them concerned that Microsoft has adopted a new, “freemium” model for making money from its operating system. They argue that, while Apple can make its upgrades free because it makes its money from the hardware it sells, Microsoft will have to find some way to make money from doing the same with its software. Given that there are only a few ways of doing this, it seems that Microsoft has taken a shotgun approach and adopted them all.

The question is whether it’s really ‘free’. Microsoft

Free upgrade

Chris Capossela, Microsoft’s Chief Marketing Officer, has declared that Microsoft’s strategy is to “acquire, engage, enlist and monetise”. In other words, get people using the platform and then sell them other things, like apps from the Microsoft App Store. The trouble is, that isn’t the only strategy Microsoft is taking. Microsoft is employing a unique “advertising ID” that is assigned to a user when Windows 10 is installed. This ID is used to target personalised ads at the user. These ads will show up while the user browses the web, and even in games downloaded from the Microsoft App Store. In fact, the game where this grabbed most attention was Microsoft’s Solitaire, where users are shown video ads unless they are prepared to pay a US$9.99 a year subscription fee. The advertising ID, along with a range of information about the user, can be used to target ads. The information that Microsoft will use includes: […] current location, search query, or the content you are viewing.
[…] likely interests or other information that we learn about you over time using demographic data, search queries, interests and favorites, usage data, and location data.

It wasn’t long ago that Microsoft was attacking Google for similar features it now includes in Windows 10. Internet Archive

It was not that long ago that Microsoft attacked Google for doing exactly this to its customers. What Microsoft is prepared to share, though, doesn’t stop at the data it uses for advertising. Although it maintains that it won’t use personal communications, emails, photos, videos and files for advertising, it can and will share this information with third parties for a range of other reasons. The most explicit of these reasons is sharing data in order to “comply with applicable law or respond to valid legal process, including from law enforcement or other government agencies”. In other words, if a government or security agency asks for it, Microsoft will hand it over.

Meaningful transparency

In June, Horacio Gutiérrez, Deputy General Counsel & Corporate Vice President of Legal and Corporate Affairs at Microsoft, made a commitment to “providing a singular, straightforward resource for understanding Microsoft’s commitments for protecting individual privacy with these services”. On the Microsoft blog, he stated: In a world of more personalized computing, customers need meaningful transparency and privacy protections. And those aren’t possible unless we get the basics right. For consumer services, that starts with clear terms and policies that both respect individual privacy and don’t require a law degree to read. This sits in contrast to Microsoft’s privacy statement, which is a 38-page, 17,000-word document. This suggests that Microsoft really didn’t want to make the basic issues of its implementation absolutely clear to users. Likewise, the settings that allow a user to control all aspects of privacy in Windows 10 itself are spread over 13 separate screens.
Also buried in the privacy statement are the types of data Cortana – Microsoft’s answer to Apple’s Siri or Google Now – uses. This includes: […] device location, data from your calendar, the apps you use, data from your emails and text messages, who you call, your contacts and how often you interact with them on your device. Cortana also learns about you by collecting data about how you use your device and other Microsoft services, such as your music, alarm settings, whether the lock screen is on, what you view and purchase, your browse and Bing search history, and more. Note that the “and more” statement basically covers everything that you do on a device. Nothing, in principle, is excluded.

Privacy by default

It is very difficult to trust any company that does not take a “security and privacy by default” approach to its products, and that then makes it deliberately difficult to change settings in order to implement a user’s privacy preferences. This has manifested itself in another Windows 10 feature, called WiFi Sense, whose default settings and potential to be a security hole have confused even experts. WiFi Sense allows a Windows 10 user to share access to their WiFi with their friends and contacts on Facebook, Skype and Outlook. The confusion has arisen because some of the settings are on by default, even though a user needs to explicitly choose a network to share and initiate the process. Again, Microsoft has taken an approach in which the specific privacy and security dangers are hidden behind a single setting. There is no way to vet who, amongst several hundred contacts, you really want to share your network with. There are steps users can take to mitigate the worst of the privacy issues with Windows 10, and these are highly recommended. Microsoft should have allowed users to pay a regular fee for the product in exchange for a guarantee of the levels of privacy its users deserve.
David Glance is Director of UWA Centre for Software Practice at University of Western Australia. This article was originally published on The Conversation. Read the original article.
Last week some old technology reappeared, refreshed for another try at getting it right this time around. Microsoft released Windows 10 and trumpeted as its main feature the return of the Start menu, which had been infamously axed in the previous version, Windows 8. Windows 10 also brings back the desktop as the main interface, relegating Windows 8’s live tiles to an extension of the Start menu. The user interface is also a major reversal of the previous version of Windows' attempt to focus on being a mobile platform supporting touch. Now that Microsoft has all but given up on its mobile phone, and its tablet doesn’t even get a mention on the global sales leaderboard, the PC is still really where it dominates. Touch screens on PCs never became a thing, so supporting the traditional keyboard and mouse/trackpad arrangement makes much more sense.

The other technology that Microsoft effectively killed off – or at least whose name it will try never to speak again – was its browser, Internet Explorer. Windows 10 introduces Microsoft Edge, a leaner, stripped-down successor to Internet Explorer. Internet Explorer has become, possibly unfairly, the browser most universally disliked by web developers. This was largely due to the fact that versions of the browser were tied to updates of Windows. Supporting Internet Explorer meant supporting potentially old and outdated versions long after other browsers like Chrome and Firefox had moved on and come to support new standards and features. In fact, the dependency of browser on operating system led companies to become tied to a particular version of Windows because of their reliance on particular versions of Internet Explorer to run their corporate applications. The significance of Microsoft’s move to Edge is that a range of technologies that were once the future of running software in the browser have disappeared as well.
Gone is support for Silverlight, Microsoft’s version of Flash, and for ActiveX, a much earlier attempt by Microsoft to allow sophisticated applications to run in the browser. ActiveX, in particular, introduced security concerns and as a consequence was never really widely adopted. Their absence is unlikely to be missed. Right now, users who have rushed to upgrade will be getting the first of many bug fixes as the inevitable problems get ironed out. For companies still on Windows 7, the familiarity of Windows 10 may make it a more tempting upgrade, but it is not clear that there is a compelling enough reason to do so. In the meantime, of course, the PC market continues to decline, with more users increasingly relying on mobile devices instead.

Google Glass relaunched as a business tool

Google has apparently relaunched its Glass wearable computer, this time aiming it at the business world rather than consumers. Google is hoping that if nobody actually sees anyone wearing the devices, they will not attract the same level of “ridicule” and concerns about privacy that the original consumer version did. The new version of Glass has a faster processor, faster wireless and a longer battery life. It also allows the glasses to fold up, which the first version didn’t. Whilst Google’s move may attract less negative publicity for the wearable, it is still hard to see what the particular benefit of Google Glass will ultimately be. The user interface’s limitations mean that it is not a great device to consume content from, and its other function, as a hands-free video streaming device, would be much better handled by something portable and worn attached to clothing rather than to a person’s face. By limiting the market in this way, it is also hard to imagine that Glass will actually be much of a revenue generator for Google.

David Glance is Director of UWA Centre for Software Practice at University of Western Australia.
This article was originally published on The Conversation. Read the original article.
Microsoft’s aim to make Windows 10 run on anything is key to its strategy of reasserting its dominance. Seemingly unassailable in the 1990s, Microsoft’s position has in many markets been eaten away by the explosive growth of phones and tablets, devices in which the firm has made little impact.

To run Windows 10 on everything, Microsoft is opening up. Rather than requiring Office users to run Windows, Office 365 is now available for Android and Apple iOS mobile devices. A version of Visual Studio, Microsoft’s key application for programmers writing Windows software, now runs on Mac OS or Linux. Likewise, with tools released by Microsoft, developers can tweak their Android and iOS apps so that they run on Windows. The aim is to allow developers to create, with ease, the holy grail of a universal app that runs on anything.

For a firm that has been unflinching in taking every opportunity to lock users into its platform, just as Apple and many other tech firms have, this is a major change of tack.

From direct to indirect revenue

So why is Microsoft trying to become a general-purpose, broadly compatible platform? Windows’ share of the operating system market has fallen steadily from 90% to 70% to 40%, depending on which survey you believe. This reflects customers moving to mobile, where Windows Phone holds a mere 3% market share. In comparison, Microsoft’s cloud infrastructure platform Azure, Office 365 and its Xbox games console have all experienced rising fortunes.

Lumbered with a heritage of Windows PCs in a falling market, Microsoft’s strategy is to move its services, and so its users, inexorably toward the cloud. This divides into two necessary steps. First, for software developed for Microsoft products to run on all of them: write once, run on everything. As it is, there are several different Microsoft platforms (Win32, WinRT, WinCE, Windows Phone) with various incompatibilities.
This makes sense, both for a uniform user experience and to maximise the revenue potential of reaching as many devices as possible. Second, to implement a universal approach so that code runs on operating systems other than Windows. This has historically been fraught, with differences in how software communicates with hardware, and in processor architecture, making it difficult. In recent years, however, improving virtualisation has made it much easier to run code across platforms.

It will be interesting to see whether competitors such as Google and Apple will follow suit, or further enshrine their products in tightly coupled, closed ecosystems. Platform exclusivity is no longer the way to attract and hold customers; instead the appeal lies in the applications and services that run on them. For Microsoft, it lies in subscriptions to Office 365 and Xbox Live Gold, in-app and in-game purchases, downloadable video, books and other revenue streams, so it makes sense for Microsoft to ensure these largely cloud-based services are accessible from operating systems other than just its own.

The Windows family tree … it’s complicated. Kristiyan Bogdanov, CC BY-SA

Platform vs services

Is there any longer any value in buying into a single service provider? Consider smartphones from Samsung, Google, Apple and Microsoft: prices may differ, but the functionality is much the same. The element of difference is the value of wearables and internet-of-things devices (for example, the Apple Watch), the devices they connect with (for example, an iPhone), the size of their user communities, and the network effect. From watches to fitness bands to internet fridges, the benefits lie in how devices are interconnected and work together. This demonstrates that digital technology is driving a new economic model, with value associated with “in-the-moment” services when walking about, in the car, or at work.
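The "write once, run on everything" idea can be sketched as a platform abstraction: the app's logic is written once against a common interface, and each operating system supplies its own implementation behind it. The names below are illustrative only, not Microsoft's actual universal app API:

```typescript
// Illustrative platform abstraction: identical app code, with each
// platform plugging its own services in behind a shared interface.
interface PlatformServices {
  name: string;
  notify(message: string): string; // e.g. a toast on desktop, a banner on phone
}

const desktop: PlatformServices = {
  name: "desktop",
  notify: (m) => `toast: ${m}`,
};

const phone: PlatformServices = {
  name: "phone",
  notify: (m) => `banner: ${m}`,
};

// The "universal app": the same function runs unchanged on either platform.
function runApp(platform: PlatformServices): string {
  return `[${platform.name}] ${platform.notify("sync complete")}`;
}

console.log(runApp(desktop)); // "[desktop] toast: sync complete"
console.log(runApp(phone));   // "[phone] banner: sync complete"
```

The design choice is the same one driving Microsoft's strategy: the incompatibilities between platforms are pushed behind the interface, so application developers never have to write per-platform code themselves.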
It’s this direction that Microsoft is aiming for with Windows 10, focusing on the next big thing that will drive the digital economy.

The revolution will be multi-platform

I predict that we will see tech firms try to grow ecosystems of sensors and services running on mobile devices, either tied to a specific platform or by driving traffic directly to their cloud infrastructure. Apple has already moved into the mobile health app and connected home markets. Google is moving in alongside manufacturers such as Intel, ARM and others. An interesting illustration of this effect is the growth of digital payments, with Apple, Facebook and others seeking ways to create revenue from the traffic passing through their ecosystems.

However, the problem is that no single supplier such as Google, Apple or Microsoft, nor internet services such as Facebook or Amazon, can hope to cover all the requirements of the internet of things, which is predicted to scale to over 50 billion devices worth US$7 trillion within five years. As we become more enmeshed with our devices, wearables and sensors, demand will rise for services driven by the personal data they create. Through “Windows 10 on everything”, Microsoft hopes to leverage not just the users of its own ecosystem, but those of its competitors too.

Mark Skilton is Professor of Practice at the University of Warwick.

This article was originally published on The Conversation. Read the original article.
The latest version of Microsoft’s Windows operating system will begin rolling out from Wednesday (July 29). And remarkably, Windows 10 will be offered as a free upgrade to users who already have Windows 7 or 8.1 installed.

That the upgrade is free is an interesting move and comes off the back of much criticism of Windows 8. Interestingly, the software giant has also skipped over any planned version 9 of Windows. So what does this mean for Microsoft and the 1.5 billion people it says use Windows every day? Can the company restore some of the consumer and user confidence it has lost in recent years?

Under Satya Nadella’s leadership, Microsoft is transforming itself into a “productivity and platforms company”. This is a bold reinvention of the company as it seeks to secure its future in a market moving steadily towards cloud-based services and mobile devices powered by Google’s Android and Apple’s iOS. Nadella sees it as a necessity to broaden the company’s scope of operations beyond its current family of products and conventional modes of delivery. The market does not leave him with much choice if the company is to stay in the game, let alone lead it.

After Windows 10 it’s just Windows

For decades, the latest release of Windows has been a major event in itself. But that is set to end. Windows 10 will be the last numbered version of the operating system. After Windows 10, it will simply be known as Windows, and you will get your updates incrementally from the cloud via a subscription service.

Many Windows users will have noticed the upgrade notification appearing on their taskbar. Microsoft

In what it is calling a “platform convergence strategy”, Microsoft is creating a unified operating environment for phones, tablets, ultrabooks, laptops, desktop computers and Xboxes. All will be integrated by Windows 10, and increasingly so with the later Windows.
The platform convergence strategy allows the creation of universal applications that can run on any platform with Windows 10. Surprisingly, applications developed for Android and iOS devices will also be able to run on Windows 10, albeit once they have been converted to make them compatible. Still, this will open up a vast number of potential applications to run across Windows platforms.

Focus on gaming

Microsoft’s acquisition last year of the hit game Minecraft for US$2.5 billion is a measure of how seriously Nadella and his strategists take mobile gaming. Minecraft is a hugely popular open-world game that gives players the freedom to create and manipulate an online world made of Lego-like blocks. The move will establish Microsoft in the booming world of mobile games as well as further popularising the Xbox gaming console.

But the question on many people’s minds is whether the personal computer itself is dead, and Microsoft along with it. It’s not the first time we have heard such dire predictions. It is true that PCs are today part of a more complicated personal computing environment, but it is a stretch to declare the PC dead. There is only so much you can do with a phone or a tablet. For serious work or fun, a full-spec laptop or desktop is still the machine of choice and will remain so. For example, I am writing this article using a laptop.

Microsoft’s latest upgrade of Windows will be free for many users. Flickr/Eric Li, CC BY-NC

The new digital economy

The Internet of Things is expanding, with embedded sensors and data gatherers becoming pervasive. Open platforms and operating environments that feed data into the cloud and allow people to derive value from it will be an important part of the new digital economy. With traditional jobs under threat from automation and artificial intelligence, imagination and creativity will be more important than ever.
Microsoft’s strategy to diversify and integrate its platform offerings and move its services to the cloud, while opening itself up to its competitors’ apps, would seem to be a bold but rational response to the current challenges, and one that stands a good chance of succeeding.

There will no doubt be loud complaints from those who claim to speak for all of us. But in the end, if a computing environment delivers value and allows people to live their lives as they please, then that platform is likely to succeed, particularly when it has the muscle and know-how of a well-established company behind it. How Google and Apple respond will be very interesting, but competition is a good thing.

David Tuffley is Lecturer in Applied Ethics and Socio-Technical Studies, School of ICT, at Griffith University.

This article was originally published on The Conversation. Read the original article.