Apple is about to open a new front in the ongoing war against online advertising. The new version of its mobile operating system, iOS 9, will support ad blocking in Safari, its mobile web browser.

A study by Adobe and pro-advertising company PageFair finds that the popularity of ad blocking extensions in desktop web browsers is responsible for US$22 billion in lost revenue to the websites that host ads. The authors estimate that there are now 198 million users worldwide actively blocking ads. Among 400 users surveyed for the report, the main reasons cited for using ad blocking software were avoiding the privacy abuses of targeted advertising and the sheer number of ads encountered when browsing.

A typical message from a website about the use of ad blocking. TheGuardian.com screen grab

The practice of trying to guilt users into switching off their ad blocking software when visiting sites doesn’t appear to be working, and websites now display such messages to ad blocking users less often.

Ad blocking apps for Safari on iOS 9 are already being made available to beta testers. One such app, Crystal, not only blocks ads; experiments by its developer have shown that it speeds up web page loading in the browser by four times. It also significantly reduces the amount of data used, which matters on a mobile device using cellular data. Another ad blocking app in beta testing, Purify, appears to block ads on YouTube as well.

Ads stand out, and that’s precisely why so many people block them. Pascale Kinchen Douglas/Flickr, CC BY-SA

Ad blocking on mobile is not completely new

Ad blocking has been available for some time on Android for users of the Firefox mobile browser and for Google Chrome. In the case of Google Chrome, blocking ads requires installing an app from outside the Google Play app store.
Ad blocking has also been available on Apple devices, but it has worked by blocking access to the domains that serve up the ads. AdBlock, for example, works by pretending to be a virtual private network (VPN) connection and filtering out access to specific sites. This of course only works if the list of sites to block is kept up-to-date. It also doesn’t allow for “whitelists”: sites that are allowed through because they are deemed “acceptable”.

However, the move by Apple is going to boost ad blocking on mobile dramatically because it will make the process that much easier. This has advertisers, and sites that make money from advertising, increasingly worried, because it raises their costs in terms of creating ads that are less intrusive and deemed more acceptable (although even this may not convince the public to view them).

Apple’s iOS 9 is due to be released later this year and will include content blocking. Apple

For Apple, though, the move to allow ad blocking gives iPhone users a better browsing experience at no cost to Apple. Apple makes no money from online advertising through mobile browsing. And, of course, its own ads that are served up through apps are unaffected by ad blocking software. As a bonus for Apple, the company most affected by ads being blocked is Google, which derives 90% of its revenue from advertising. Apple is able to increase the level of privacy it offers its customers without getting directly involved itself and risking annoying companies that rely on advertising revenue.

The advertisers’ dilemma

Many ads can be deliberately deceptive. Create Meme

It is hard to feel sorry for the advertisers and the sites that resort to displaying targeted, invasive ads, such as those sold by Google, Facebook, Yahoo and others. These ads are designed to target individuals based on information gathered about them as they use the internet. So not only are they annoying, they also exploit people’s privacy.
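The domain-based filtering that VPN-style blockers such as AdBlock rely on can be sketched in a few lines. This is a minimal illustrative sketch only, assuming a simple hostname blocklist; the blocklist entries and function name here are hypothetical and not taken from any real app.

```python
# Hypothetical blocklist of ad-serving domains (real blockers ship
# curated lists with thousands of entries that must be kept updated).
BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def is_blocked(hostname: str, blocklist: set = BLOCKLIST) -> bool:
    """Return True if the hostname or any parent domain is on the blocklist."""
    parts = hostname.lower().split(".")
    # Check the hostname itself and each parent domain, so that
    # cdn.ads.example.com matches a blocklist entry for ads.example.com.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in blocklist:
            return True
    return False
```

This also makes the whitelist limitation mentioned above concrete: a purely domain-based filter can only answer “block or allow” per hostname, with no notion of letting an “acceptable” ad through on a blocked domain.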
Adding insult to injury, the inclusion of ads slows down web page loads and potentially ends up costing end-users money by eating into their data allocation.

The argument that content providers are only able to provide content by exploiting their visitors is not a good one, because it implies that those visitors signed up to an agreement to view ads in exchange for the content. Of course, users generally do no such thing, and given the explicit choice, they might easily opt simply not to visit the site.

Most users don’t necessarily mind being provided with information that allows them to make a reasoned choice about a product once they have decided to buy it. But advertising that tries to persuade a consumer to buy something they weren’t considering buying is a different matter. Once advertisers do more of the former and less of the latter, perhaps ad blocking will no longer be necessary.

David Glance is Director of UWA Centre for Software Practice at University of Western Australia.

This article was originally published on The Conversation. Read the original article.
Making a hugely successful app like Snapchat or Angry Birds has little to do with luck. There are consistent patterns among these apps that make it possible to reach that kind of popularity. Even before you start wireframing your idea, this is an area worth studying. Here are some of these patterns.

They’re Simple

Using Instagram requires three or four taps, and you’re just sharing and browsing photos. Playing Angry Birds also requires only a couple of taps to get the full experience. The list goes on: Snapchat, Twitter, WhatsApp and the rest are all built around one feature. Not five features or ten features, just one key feature.

The reason simple apps succeed is that to get people to adopt something new, it must be easy. The more straightforward it is, the more likely they are to keep using the app. Fewer features also mean broader appeal: it’s harder to satisfy many users if you make your app complex.

They Nail The Onboarding

The vast majority of users who download an app use it only once before they delete or abandon it. Only about 16% try out an app more than twice. The reason behind these demoralizing numbers is a disconnect between what users expect from the app and what they feel they’ll get based on their first impression. In other words, they don’t get the app when they first use it.

For example, Evernote promises to organize your work, but if you fail to understand how to do that with its app, you won’t feel motivated to use it. So Evernote creates an onboarding experience that educates users on how to achieve their desired goal by using the app. This is called reaching the AHA moment. The AHA moment is when the user gets the app. The best apps do a great job of helping users reach the AHA moment the first time they use the app.

They’re Addictive

Have you ever heard someone refer to another person as an Instagram addict, Snapchat addict or Facebook addict?
As you can see in the chart below, for most apps the average weekly use is one to nine times. Compare that to ten or twenty times a day for Facebook, Snapchat or Angry Birds. The difference is mind-blowing.

It’s not just about getting users to come back; it’s also about getting them back on a regular and frequent basis. For most apps, this never happens. As the chart below shows, most users who start using an app abandon it within 12 months.

But how do they do it? There’s a concept called the hooked model, first described by Nir Eyal, which explains how these apps become addictive. They’re basically designed as a trigger–reward loop, something that makes our brain turn a behaviour into a habit if it is repeated enough times. And how do they make us repeat the cycle? By turning our actions into the next trigger. For example, you post a photo on Instagram and close the app. The next thing that happens is that someone comments on it, you get a notification (trigger) and you reopen the app to use it again.

They Make Their Users Promote the App

While most business schools teach their students about writing marketing plans and setting budgets for advertising, the reality is that apps like WhatsApp, Facebook or Instagram never really made any. They spent close to $0 on marketing. In fact, there’s no other way: can you imagine reaching one billion users by paying for them? No startup has that kind of money.

To scale, most of these apps are designed to make users do the marketing for them. For example, Duolingo allows users to brag about their results on social media and invite friends to compete. Instagram allows cross-posting to other social networks, and many gaming apps bribe users to invite their friends, for example by asking the user to pay or invite friends to get an extra life.

They Focus on a Specific User

While successful apps eventually reach the mainstream market, they never target it initially.
A big part of growing virally comes from using the network effects of small communities. For example, Uber launched in San Francisco and spread via word of mouth. Facebook launched at Harvard. Instagram initially targeted hipsters and foodies.

Crossing the Chasm theory by G. Moore

We could go on with more examples. The point is: mainstream customers are hard to impress with novelty. The mainstream wants proven products and brands. It’s easier to completely dominate a small community and use its excitement to impress mainstream customers.

This article was originally published on Appster.
In today’s mobile media environment, an incredible amount of information is available to every one of us, every minute of every day. With our cell phones close by, we can easily search for answers to trivia questions or word definitions, or find the perfect recipe for the confetti eggplant bought at the farmers’ market. When traveling, we have instant access to the conversion rate between the euro and the dollar and can map directions to any location. And then there is all the personalized information posted by our Facebook friends.

So, how do we keep up with and understand this wide array of information? How do we integrate it into our lives as we participate in a connected world? And how do we make meaningful additions to these spaces as originators of information in the online venues that matter to us?

As researchers of library and information science, we use the term metaliteracy as a way to look at literacy in the social media age. Previously, the term was used mostly in connection with literacy studies. We expand the idea further in our book, Metaliteracy: Reinventing Information Literacy to Empower Learners, using it as a way to recast information literacy for reflective learning with social media and emerging technologies.

So, what exactly is metaliteracy, as we define it?

Metaliterate mindset

To understand it, let’s consider some common web-based situations that we encounter daily. When browsing the web or scrolling through Facebook, you may have noticed that the ads that appear often align very closely with searches you’ve performed previously. For instance, after searching for consumer products such as a new sofa, you probably encountered the exact same products and stores you originally sought out. At times, this might be just what you want. But after a while, it might start to feel a bit intrusive. Yes, you can adjust your ad settings to increase the chances that relevant advertisements appear only when you are on Google sites such as YouTube.
But did you also know that you can opt out of this feature? Here is where metaliteracy comes into play. A metaliterate learner would always dig deeper into the search process, ask good questions about sources of information, consider privacy and ethical issues, and reflect on the overall experience, while adapting to new technologies and platforms.

Filtering in a connected world

There is more going on here than we might think. For instance, did you know that the information we see online is often being filtered for us, by someone else? Google has been personalizing your search results since 2005 if you were signed in and had your web history enabled. If you were being cautious and didn’t sign in, starting in 2009 Google began using 180 days of your previous search activity to accomplish the same thing. Google might call it personalizing, but others see it as constricting.

Yes, but how many of us use other search engines? Patrick Barry, CC BY-SA

Information filtering, or “filter bubbles,” as author and Upworthy cofounder Eli Pariser calls it in his TED Talk, can circumscribe the information we see when we conduct those searches. Filtering results in isolated information ecosystems of our own making.

What about other search engines? If we are willing to break away from the convenience of Google, we could use other search services. DuckDuckGo and Startpage are just two of several search engines that provide more privacy than some of the big names. For instance, DuckDuckGo does not engage in “search leakage,” as that firm calls it. It notes that other search engines save not only individual searches but also your search history:

Also, note that with this information your searches can be tied together. This means someone can see everything you’ve been searching, not just one isolated search. You can usually find out a lot about a person from their search history.
But worse, search engines may release searches without adequately anonymizing the information, or that information may be hacked. Startpage allows you to funnel your search in a way that obtains Google results without your personally identifiable information traveling along with the query. But how many of us opt to use a search engine other than Google? Google is still the dominant search engine worldwide.

Filtering information

So, then, are we weaving our own webs without carefully thinking through the many implications of doing so? Look at how we selectively create and share our experiences on the fly, editing and filtering digital information along the way and making choices about permissions to view and to share. For instance, imagine the millions of selfies that reflect our individual personas while being shared within a larger social mosaic of interconnected audiences. Sometimes we may not even be aware of who can access our content or how it is distributed beyond our immediate circle of friends. Or consider our focused concentration on texting in large crowds, ignoring chance encounters with others and missing the random scenery of everyday experience. Information filtering is ongoing in all these contexts and is both internally and externally constructed.

Empowered contributors

What does metaliteracy do? Metaliteracy prepares us to ask critical questions about our searches and about the technologies we use to seek answers and to communicate with others. We do not simply accept the authority of information because it comes from an established news organization, a celebrity, a friend, or a friend of a friend. Metaliteracy encourages reflection on the circumstances under which information is produced. It prepares us to ask whether the material came from an individual or an organization and to determine the reason for posting or publishing it.
As part of this process, the metaliterate learner will seek to verify the source and ask questions about how the information is presented and in what format. Metaliterate individuals gain insights into open environments and how to share their knowledge in these spaces. For instance, they are well aware of the importance of Creative Commons licenses for determining what information can be reused freely, for making such content openly available for others’ purposes, and for producing their own content. They also understand the importance of peer review and peer communities for generating and editing content for sites such as Wikipedia, open textbooks and other forms of Open Educational Resources (OERs).

The truth is that we can all be metaliterate learners – reflective and empowered, asking perceptive questions, thinking about what and how we learn, and sharing our content and insights as we make contributions to society.

Trudi Jacobson is Distinguished Librarian at the University at Albany, State University of New York. Thomas P Mackey is Vice Provost for Academic Programs at SUNY Empire State College.

This article was originally published on The Conversation. Read the original article.
Twitter has been in the news recently, for all the wrong reasons. Business media report that Twitter shareholders are disappointed with the company’s latest results; this follows recent turmoil in the company’s leadership, which saw the departure of controversial CEO Dick Costolo and the (temporary) return of co-founder Jack Dorsey until a permanent replacement is found.

All this has served to feed rumours that Google, having recently called time on its own underperforming social network Google+, might be interested in acquiring Twitter. From one perspective, this would clearly make sense: social media are now a key driver of Web traffic and a potentially important advertising market, and Google will not want to remain disconnected from this space for long. On the other hand, given its chequered history with the now barely remembered Google Buzz as well as the major effort that was Google+, Twitter users (and the third-party companies that serve this userbase) may well be concerned about what a Google acquisition of the platform could mean for them.

I had the opportunity to explore these questions in some detail in an extended interview with ABC Radio’s Tim Cox last week. In a wide-ranging discussion, we reviewed the issues troubling Google+ and Twitter, and the difficulties facing any player seeking to establish a new social media platform alongside global market leader Facebook.

Let us take this conversation further: what if Google did buy Twitter? From my point of view, this could turn out to be a positive move, if Google treats the platform appropriately (as it did, arguably, with past acquisitions such as Blogger, YouTube and Google Maps). It has become very obvious over the past months that Twitter’s stock market listing has been a curse at least as much as a blessing: while it has raised substantial new capital, of course, it has also exposed the company to the expectations of shareholders who seem to fundamentally misunderstand what Twitter is or can be.
As a platform, Twitter is not and never will be a competitor to Facebook, whatever its shareholders seem to think. Both might be classed under the overall rubric of “social media”, but any direct comparison constitutes a category error: the appeal of a strong-ties, small-world network platform like Facebook, where we tend to network predominantly with family and friends, is necessarily fundamentally different from that of a weak-ties, large-world space like Twitter, where we can follow – and attempt to strike up conversations with – celebrities, politicians and other users outside our immediate networks.

That is a very different kind of social network, with its own unique uses, and it is futile to hope that Twitter will eventually attract the same number of users, or the same user activity patterns, as Facebook. Worse still, trying to reshape Twitter in Facebook’s image by force will almost inevitably kill off the platform.

If Google understands this, and treats Twitter appropriately (which probably includes accepting it as a loss leader for the time being), it could well turn the platform’s fortunes around. Twitter’s recognised strengths are as a flat, public and open network that excels especially in live contexts: Twitter is the place where most recent breaking news stories first broke, and a space where users gather as a temporary public and community to collectively participate in shared experiences, from the World Cup to Eurovision. Beyond any marketing hype, it genuinely serves as the pulse of the planet in a great many contexts. This live insight into what news stories and other information are currently hot (and thus should be served as search results, too) may well be valuable enough for Google to fork out a few billion, even if there still doesn’t seem to be a workable model for generating significant direct advertising revenue from the platform.
But whoever takes on Twitter, one of the first things the new CEO will need to do is fundamentally rebuild Twitter’s relationship with those on whom, historically, its successes have most depended: the flotilla of third-party developers and researchers that surrounds the Twitter mothership.

As Jean Burgess and I have documented in our contribution to the forthcoming collection Digital Methods for Social Science, those developers – and the early adopters and lead users whom they have served – have made the platform what it is: they developed powerful Twitter clients and tools, and laid the groundwork for the social media analytics approaches that have become crucial for making sense of trends on Twitter and elsewhere.

Sadly, though, especially under Dick Costolo, Twitter’s relationship with these crucial allies in the promotion of Twitter as a platform and a community soured significantly: abrupt and radical changes to the terms of service of the Twitter API (which governs what data companies and their tools can gain access to), made in pursuit of more revenue, undermined this crucial third-party ecosystem and stymied further innovation. And if anything, the handful of exceptions to this new, more restrictive régime – such as the Twitter Data Grants for researchers, which supported a total of only six out of 1,300 proposed projects – caused further offence rather than restoring goodwill.

Absent any major new investment, a Twitter relying mainly on the support of its shareholders seems unlikely to change tack in this way: it will continue to chase revenue by attempting to commercialise its data, and in the process also continue to alienate the crucial third-party developer community. This is a path of diminishing returns: the data are valuable only as long as there are popular and meaningful applications for Twitter as a platform, but those applications have historically been created by the third-party developers and the power users they support.
Freed from the short-term, unrealistic demands of the stock market through an acquisition by Google (or another cashed-up investor), on the other hand, Twitter could dial back its desperate efforts to commercialise its APIs and the data they provide, and return to its original, more permissive data access régime in order to nurture and support new efforts at research and development. Such a shift in policy could well be the shot in the arm Twitter needs to ensure its longer-term survival – but it depends on the intervention of a new benefactor. Is Google ready to play – or is it still too disheartened by its past attempts to enter the social media market?

Axel Bruns is Professor, Creative Industries at Queensland University of Technology.

This article was originally published on The Conversation. Read the original article.
UK communications regulator Ofcom has released a report that gives a fascinating snapshot of digital society in the UK. It highlights the dominance of mobile, and the centrality of social media in social interactions and relationships. The change has been brought about not by improvements in fixed broadband but by the availability of larger, more capable phones and faster 4G mobile networks. Phones and 4G are in turn facilitating communication through a variety of channels, especially social media.

Bigger phones allow people to do more

In terms of the importance of mobile, 33% of UK residents now view their smartphone as the most important device for connecting to the Internet, compared to 30% who chose their laptop. This switch in preference has come about because of the general increase in the size of phones. The release of Apple’s iPhone 6 and 6S in response to the popularity of Android phones of the same size has helped cement the larger form factor as a standard. People can now comfortably carry out many of the tasks that would normally have been reserved for a laptop, PC or tablet.

4G is the other key enabler of the move to mobile

The second reason is the increase in speed of the average smartphone connection. Some 45% of UK smartphone users have access to 4G networks, a 28% increase on the previous year. The faster speeds have not only resulted in greater use of mobile data generally but have also shifted what users do with their phones. 4G users are more likely to use their phones to access audio-visual content (57% of 4G users compared to 40% of non-4G users). They are also more likely to use their phones to make online purchases and use online banking.

Faster fixed broadband plays a smaller part

What is interesting is that the changes brought about by the increased use of smartphones have had more impact than the increase in speeds of fixed broadband services to the home. 83% of UK premises are able to receive broadband speeds of 30 Mbit/s or higher.
30% of homes have connected to broadband at these higher speeds. Mobile 4G users were less likely to use their home wireless than those not on 4G, showing a general trend towards “cutting the cord” even in the area of Internet access.

Changing communication and social media

UK Internet users believe that technology has changed the way they communicate and that these new forms of communication have made life easier. Traditional forms of digital communication such as email and text messaging are still dominant, but 62% of online adults use social media and 57% use instant messaging to communicate regularly with family and friends. Technologies such as Skype, FaceTime and Google Hangouts are also used by 34% of adults.

In terms of social media use, Facebook is by far the dominant platform, with 72% of adults having a social media profile and 97% of those having one on Facebook. Although teenagers are likely to use other social media platforms, 48% of social media users use Facebook exclusively. People are also spending more time on Facebook than on any other service. In March 2015, users spent 51 billion minutes on Facebook’s website and apps, compared to 34 billion on Google’s. YouTube was also watched by more people via mobile devices than on desktops or laptops.

Change, but not in productivity

Although digital technologies have brought about a major change in society in the UK, this hasn’t been reflected in any change in the productivity of the UK economy. The UK continues to rate behind France, Germany, the US and even Italy in terms of worker productivity.

Surveys such as this underscore several important points. The first is that investment in fixed broadband infrastructure is not necessarily as important as investment in universal high-speed wireless access in terms of its impact on society. Second, although we may see radical changes in social norms through the use of digital technologies, this won’t necessarily show up as increased productivity.
The last point has to be qualified, however. It may well be that existing businesses do not show any improvement in productivity, but new forms of industry and business enabled by a mobile economy may well bring about radical changes in productivity. Uber, Airbnb and other players in the so-called “gig economy” threaten to disrupt established industries, and this is only possible through the use of mobile phones and high-speed wireless.

David Glance is Director of UWA Centre for Software Practice at University of Western Australia.

This article was originally published on The Conversation. Read the original article.
Windows 10, it seems, is proving a hit with both the public and the technology press after its release last week. After two days, it had been installed on 67 million PCs. Of course, sceptics may argue that this may simply be a reflection of how much people disliked Windows 8 and of the fact that the upgrade was free.

For others, though, it is the very fact that the upgrade is free that has them concerned that Microsoft has adopted a new, “freemium” model for making money from its operating system. They argue that, while Apple can make its upgrades free because it makes its money from the hardware it sells, Microsoft will have to find some other way to make money from giving away its software. Given that there are only a few ways of doing this, it seems that Microsoft has taken a shotgun approach and adopted them all.

The question is whether it’s really ‘free’. Microsoft

Free upgrade

Chris Capossela, Microsoft’s Chief Marketing Officer, has declared that Microsoft’s strategy is to “acquire, engage, enlist and monetise”. In other words: get people using the platform and then sell them other things, like apps from the Microsoft App Store. The trouble is, that isn’t the only strategy Microsoft is taking.

Microsoft is employing a unique “advertising ID” that is assigned to a user when Windows 10 is installed. This is used to target personalised ads at the user. These ads will show up whilst using the web, and even in games downloaded from the Microsoft App Store. In fact, the game where this grabbed most attention was Microsoft’s Solitaire, where users are shown video ads unless they are prepared to pay a US$9.99 a year subscription fee.

The advertising ID, along with a range of information about the user, can be used to target ads. The information that Microsoft will use includes:

[…] current location, search query, or the content you are viewing.
[…] likely interests or other information that we learn about you over time using demographic data, search queries, interests and favorites, usage data, and location data.

It was not that long ago that Microsoft attacked Google for doing exactly this to its customers.

What Microsoft is prepared to share, though, doesn’t stop at the data it uses for advertising. Although it maintains that it won’t use personal communications, emails, photos, videos and files for advertising, it can and will share this information with third parties for a range of other reasons. The most explicit of these is sharing data in order to “comply with applicable law or respond to valid legal process, including from law enforcement or other government agencies”. In other words, if a government or security agency asks for it, Microsoft will hand it over.

Meaningful transparency

In June, Horacio Gutiérrez, Deputy General Counsel & Corporate Vice President of Legal and Corporate Affairs at Microsoft, made a commitment to “providing a singular, straightforward resource for understanding Microsoft’s commitments for protecting individual privacy with these services”. On the Microsoft blog, he stated:

In a world of more personalized computing, customers need meaningful transparency and privacy protections. And those aren’t possible unless we get the basics right. For consumer services, that starts with clear terms and policies that both respect individual privacy and don’t require a law degree to read.

This sits in contrast to Microsoft’s privacy statement, which is a 38-page, 17,000-word document. This suggests that Microsoft really didn’t want to make the basic issues of its implementation absolutely clear to users. Likewise, the settings that allow a user to control all aspects of privacy in Windows 10 itself are spread over 13 separate screens.
Also buried in the privacy statement are the types of data that Cortana – Microsoft’s answer to Apple’s Siri or Google Now – uses. These include:

[…] device location, data from your calendar, the apps you use, data from your emails and text messages, who you call, your contacts and how often you interact with them on your device. Cortana also learns about you by collecting data about how you use your device and other Microsoft services, such as your music, alarm settings, whether the lock screen is on, what you view and purchase, your browse and Bing search history, and more.

Note that the “and more” basically covers everything that you do on a device. Nothing, in principle, is excluded.

Privacy by default

It is very difficult to trust any company that does not take a “security and privacy by default” approach to its products, and that then makes it deliberately difficult to change settings in order to implement a user’s privacy preferences. This has manifested itself in another Windows 10 feature, called WiFi Sense, whose default settings and potential to be a security hole have confused even the experts.

WiFi Sense allows a Windows 10 user to share access to their WiFi with their friends and contacts on Facebook, Skype and Outlook. The confusion has arisen because some of the settings are on by default, even though a user needs to explicitly choose a network to share and initiate the process. Again, Microsoft has taken an approach in which the specific privacy and security dangers are hidden behind a single setting. There is no way to vet who, amongst several hundred contacts, you really want to share your network with.

There are steps users can take to mitigate the worst of the privacy issues with Windows 10, and these are highly recommended. Microsoft should have allowed users to pay a regular fee for the product in exchange for a guarantee of the level of privacy its users deserve.
David Glance is Director of the UWA Centre for Software Practice at the University of Western Australia. This article was originally published on The Conversation. Read the original article.
Microsoft’s aim to make Windows 10 run on anything is key to its strategy of reasserting its dominance. Seemingly unassailable in the 1990s, Microsoft’s position has in many markets been eaten away by the explosive growth of phones and tablets, devices in which the firm has made little impact. To run Windows 10 on everything, Microsoft is opening up. Rather than requiring Office users to run Windows, Office365 is now available for Android and Apple iOS mobile devices. A version of Visual Studio, Microsoft’s key application for programmers writing Windows software, now runs on Mac OS or Linux operating systems. Likewise, with tools released by Microsoft, developers can tweak their Android and iOS apps so that they run on Windows. The aim is to allow developers to create, with ease, the holy grail of a universal app that runs on anything. For a firm that, like Apple and many other tech firms, has been unflinching in taking every opportunity to lock users into its platform, this is a major change of tack. From direct to indirect revenue So why is Microsoft trying to become a general purpose, broadly compatible platform? Windows’ share of the operating system market has fallen steadily, from 90% to somewhere between 70% and 40%, depending on which survey you believe. This reflects customers moving to mobile, where the Windows Phone holds a mere 3% market share. In comparison, Microsoft’s cloud infrastructure platform Azure, Office 365 and its Xbox games console have all experienced rising fortunes. Lumbered with a heritage of Windows PCs in a falling market, Microsoft’s strategy is to move its services – and so its users – inexorably toward the cloud. This divides into two necessary steps. First, for software developed for Microsoft products to run on all of them – write once, run on everything. As it is, there are several different Microsoft platforms (Win32, WinRT, WinCE, Windows Phone) with various incompatibilities.
This makes sense, for a uniform user experience and also to maximise revenue potential by reaching as many devices as possible. Second, to implement a universal approach so that code runs on operating systems other than Windows. This has historically been fraught, with differences in how platforms communicate with hardware, and in processor architectures, making it difficult. In recent years, however, improving virtualisation has made it much easier to run code across platforms. It will be interesting to see whether competitors such as Google and Apple will follow suit, or further enshrine their products into tightly coupled, closed ecosystems. Platform exclusivity is no longer the way to attract and hold customers; instead the appeal is the applications and services that run on them. For Microsoft, it lies in subscriptions to Office365 and Xbox Gold, in-app and in-game purchases, downloadable video, books and other revenue streams – so it makes sense for Microsoft to ensure these largely cloud-based services are accessible from operating systems other than just their own. The Windows family tree … it’s complicated. Kristiyan Bogdanov, CC BY-SA Platform vs services Is there still any value in buying into a single service provider? Consider smartphones from Samsung, Google, Apple and Microsoft: prices may differ, but the functionality is much the same. The element of difference is the value of wearables and internet of things devices (for example, Apple Watch), the devices they connect with (for example, an iPhone), the size of their user communities, and the network effect. From watches to fitness bands to internet fridges, the benefits lie in how devices are interconnected and work together. This is a truly radical concept that demonstrates digital technology is driving a new economic model, with value associated with “in-the-moment” services when walking about, in the car, or at work.
It’s this direction that Microsoft is aiming for with Windows 10, focusing on the next big thing that will drive the digital economy. The revolution will be multi-platform I predict that we will see tech firms try to grow ecosystems of sensors and services running on mobile devices, either tied to a specific platform or by driving traffic directly to their cloud infrastructure. Apple has already moved into the mobile health app and connected home markets. Google is moving in alongside manufacturers such as Intel and ARM. An interesting illustration of this effect is the growth of digital payments – with Apple, Facebook and others seeking ways to create revenue from the traffic passing through their ecosystems. However, the problem is that no single supplier, whether a platform owner like Google, Apple or Microsoft or an internet service like Facebook or Amazon, can hope to cover all the requirements of the internet of things, which is predicted to scale to over 50 billion devices worth US$7 trillion within five years. As we become more enmeshed with our devices, wearables and sensors, demand will rise for services driven by the personal data they create. Through “Windows 10 on everything”, Microsoft hopes to leverage not just the users of its own ecosystem, but those of its competitors too. Mark Skilton is Professor of Practice at the University of Warwick. This article was originally published on The Conversation. Read the original article.
It is a year since I last wrote about Adobe Flash and why everyone should stop using it. Since then, the leaks from the hack of the surveillance software company HackingTeam have revealed three serious bugs (called zero-day bugs) in Flash that it was exploiting to take over victims’ machines. It is likely that more Flash vulnerabilities will be revealed as security researchers work through the documents the hackers removed from HackingTeam. The leaked exploits have already appeared in hacking toolkits and are presumably already being used on the general public. Since these bugs have come to light, both Mozilla and Google have blocked various versions of Flash from running on their browsers. Other companies are removing Flash from installs on new computers. The momentum behind the movement to rid the web entirely of Flash has picked up, with Facebook’s Chief Security Officer Alex Stamos saying: It is time for Adobe to announce the end-of-life date for Flash and to ask the browsers to set killbits on the same day. The reality is, there really is no reason for Flash to still exist or be supported by modern browsers. Steve Jobs made this point in 2010. Unfortunately, it still persists because Adobe still makes money from it, a large number of people can’t be bothered changing how they produce their ads and websites, and an even larger number of people are still running software that is too old to run the modern replacement for Flash, HTML5. The latter group can probably also be split into those who can’t be bothered to upgrade and those who can’t afford to. One has to believe that Flash has become a huge liability for Adobe. Being known as a company enabling a large part of the internet’s security problems is not good reputationally. However, Flash is still a part of its Creative Cloud product suite, and so it seems that any moves to abandon it won’t come from Adobe voluntarily. Usage is decreasing, albeit not fast enough.
Flash is still used on around 11% of websites, down 2 to 3 percentage points from a year ago. The environment has changed, however, even from a year ago. Mobile is rapidly becoming the dominant platform for accessing the internet, and these devices don’t run Flash. More importantly, the pervasiveness of government surveillance and of cyber-crime in general has become all too apparent, even to the general public. Whilst surveillance by our own governments may not impact everyone, cyber-crime has become so prevalent that the public is becoming more security conscious. This is being helped in part by companies making security and privacy a bigger part of what they do and simplifying access protection with mechanisms like fingerprint recognition on mobile devices. Another factor is that Flash use is tightly coupled with how annoying and intrusive ads are displayed on websites. Removing Flash may be an inconvenience for accessing a small amount of functionality, but users actively removing and blocking ads has become much more common. As more ads get blocked, the incentives for advertisers to use Flash to create web ads diminish significantly. If you do want to remove Flash (and, as a security measure, it is advisable at least to limit its use), there are a number of ways to disable it temporarily or permanently. An added benefit of removing Flash is that you won’t get constant messages asking you to update it as security flaws are discovered and fixed by Adobe on an almost daily basis. David Glance is Director of the UWA Centre for Software Practice at the University of Western Australia. This article was originally published on The Conversation. Read the original article.
Australian rugby league games could be heading online following reports the National Rugby League (NRL) has been in discussions with Google as part of the sporting organisation’s latest media rights negotiations. The discussions are said to involve broadcasting NRL games via Google’s YouTube video website. These are not the first discussions rumoured to have taken place between YouTube and an Australian sporting organisation. Last year it was said that the Australian Football League (AFL) was in discussions with YouTube, as part of its new media rights deal to start in 2017. It should also be noted that YouTube has made a shift toward professional sports media over the past few years. In 2010 it secured the live-streaming rights to Indian Premier League (IPL) cricket. Three years later, YouTube began to experiment with major American sports, including Major League Baseball (MLB) and the National Basketball Association (NBA). How would a deal between an Australian sports organisation and YouTube impact Australian sport media rights? International sport media rights Sports media rights are much sought after. The investment banking group Jefferies Group LLC sees sports as vital for TV channels because 97% of all sports programming is watched live. This is evident in the high stakes of sports media rights globally. In the UK, Sky and BT recently paid a record £5.136 billion (A$10.48 billion) for live Premier League soccer television rights, almost doubling the previous £3 billion (A$6.12 billion) deal. The annual amounts paid for sports media rights in the US range from US$1.5 billion (A$1.93 billion) annually for Major League Baseball to US$3 billion (A$3.85 billion) per year for the National Football League (NFL). The NBA’s recent media rights deal of US$2.66 billion (A$3.42 billion) annually more than doubled its previous deal. How does this compare to Australian sport media rights?
The AFL’s current media rights deal, which includes Seven West Media, Foxtel, Fox Sports and Telstra, is valued at A$1.25 billion. The new media rights are expected to reach more than A$2 billion for a five-year deal. The current NRL media rights deal with Nine and Fox Sports is valued at A$1.025 billion over five years, just under the AFL. There is a clear gap between the value of Australian sporting media rights and those in the UK and US, which is arguably one factor in YouTube’s interest in the Australian sports market. Could YouTube become a sport broadcaster? Today the online video market is estimated to be worth US$200 billion to US$400 billion (A$275 billion to A$514 billion), with YouTube having the largest share. YouTube currently has more than 1 billion users, has more than 300 hours of video uploaded to its site every minute, is localised in 75 countries and available in 61 languages. It was recently reported that in the US YouTube reached more Americans between the ages of 18 and 34 than any cable channel, including ESPN. There has also been a 50% growth in the amount of time users spend watching videos on YouTube year over year, and 50% of that viewing is via mobile. The live streams on YouTube have the potential to far outweigh the highest audience ratings of Australian television broadcasters. Felix Baumgartner’s world record free fall skydive, for example, had 8 million simultaneous viewers. Who will pay? Advertisers or the users? If YouTube were to commence broadcasts of Australian sports, the question is, who will pay? YouTube already has subscription-based services available that would allow Australian sports to charge per game, per month or per year. But how would this impact the current alternative platforms that both the AFL and NRL have? Both have services for mobile and online viewing, part of digital rights deals with Telstra. The AFL’s deal is worth A$153 million and the NRL’s A$100 million.
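To make that gap concrete, the figures above can be annualised in a few lines. This is a rough sketch only: it assumes both Australian deals run their full five-year terms and ignores currency movements.

```python
# Annualised value of the rights deals quoted above.
# Assumption (mine): both Australian deals are five-year terms, as stated.

deals_aud_total = {
    "AFL (Seven West Media, Foxtel, Fox Sports, Telstra)": 1.25e9,  # A$1.25bn / 5 yr
    "NRL (Nine, Fox Sports)": 1.025e9,                              # A$1.025bn / 5 yr
}

for name, total in deals_aud_total.items():
    # Divide the total deal value by its five-year term
    print(f"{name}: A${total / 5 / 1e6:.0f}m per year")

# Compare with roughly A$3.85bn per year for the NFL alone.
```

On those assumptions the AFL deal works out at about A$250 million a year and the NRL’s at about A$205 million, an order of magnitude below the big US and UK codes.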
Any digital rights deal with YouTube would have an impact upon the current approach toward digital rights. A deal similar to the IPL’s could be struck, where YouTube “involves every country outside the US”. YouTube could become a digital partner to broadcast AFL and NRL for countries other than Australia, assisting in the internationalisation of the codes. YouTube and new ways to watch sport In addition to the sheer global reach of YouTube, the other area to consider is broadcast technologies and the way in which YouTube has begun to experiment in this area. In recent months, virtual reality (VR) and 360 degree video have been big talking points, particularly with the release of Microsoft’s HoloLens and, more recently, the new Oculus Rift VR headset, now owned by Facebook. Google has also released Google Cardboard, which gives anyone with a smartphone a cheap entry point to VR. Google also recently announced its Jump camera rig for 360 degree videos, which holds 16 GoPro cameras and costs well over US$8,000 (A$10,280) including the cameras. But there are cheaper alternatives to Jump coming into the market. The Giroptic camera is under US$500 (A$643) and the Bubl camera is US$799 (A$1,027); both are smaller than Jump and extremely affordable in comparison. YouTube allows for 360 degree video to be uploaded to its site, something that has been taken up by artists such as Björk and the Red Bull Formula 1 racing team. The 360 degree effect only works when viewed in Google’s Chrome web browser. Sporting organisations are willing to experiment with new technologies. This is evident in the recent virtual reality content filmed for Samsung’s Gear VR at the NBA’s all-star weekend. The National Hockey League (NHL) also experimented with using GoPro cameras for its all-star weekend to give viewers a point-of-view perspective.
Example of NBA All-Star Virtual Reality via a Samsung Gear VR Headset These new ways of viewing video content could have a major impact on the future of sports broadcasts and what the viewer sees on a screen, but they do not need to entirely replace current methods. Future of sports broadcasting In the current media environment it seems that YouTube will not replace current sports broadcasting. For Australia, the anti-siphoning laws prevent subscription or pay-per-view services from exclusively broadcasting many major Australian sporting events. This would prevent YouTube from having a major impact in the near future of sports broadcasts, but it could shake up the digital rights component. The other factor is Australians’ television viewing habits. The current reports still show a strong difference between television viewing and online video viewing habits. What YouTube could do for Australian sports is allow both the AFL and NRL to be internationalised by making their games available to people outside Australia, something that the AFL in particular has been strongly working on. In addition to providing a linear stream, YouTube could be a potential platform for sporting organisations to experiment further with new broadcast and viewing technologies, such as 360 degree video. Imagine being able to experience being in the crowd at the Melbourne Cricket Ground. A 360 degree video could allow the viewer – both in Australia and overseas – a fly-on-the-wall perspective on the ground itself, via cameras installed on goal posts or positioned above the ground. YouTube thus does have the potential to lead the way in new forms of sports broadcasting. Marc C-Scott is Lecturer in Digital Media at Victoria University. This article was originally published on The Conversation. Read the original article.
Promoting the value of entrepreneurship to the Australian economy will be the focus of new StartupAUS chief executive officer Peter Bradd. On Friday, StartupAUS announced Bradd, a foundation member of its board, would become the organisation’s first ever CEO. Prior to joining StartupAUS, Bradd was a founding director of Sydney co-working space Fishburners and the founder of personalised postcard startup ScribblePics. Bradd says he wants to work to change the perception of entrepreneurship in Australia. “People say things like those entrepreneurs are good at selling the dream and putting their hands out, but what do they really contribute to economic growth?” he says. “People in government ask things like why support technology entrepreneurs when nine out of 10 fail and those that don’t go overseas. I really want to change that conversation. It’s the wrong conversation to be having. PwC estimated tech could create 500,000 jobs by 2034.” Bradd argues that narrative makes it sound like startup founders are segmented from the broader Australian community. “Entrepreneurs are a group of people with similar needs. Innovators across every industry, be it financial services, mining, agriculture, aged care, health services, transport,” he says. “You’ve got people creating apps and websites to aggregate or provide services through tech enablers. People creating high technology, like Wi-Fi, which was created in Australia. Then you’ve got a whole heap of different things. “They need venture capital. A higher percentage of their staff need to have technology skills. They’re entrepreneurial, they need entrepreneurship skills and education. There’s a whole heap of things they all need, but they do work in industry.” Bradd says Australia’s startup ecosystem is growing organically but could do with a push. “Australia is quite far behind, and the way that ecosystems grow is they need to grow the ball and push it down the hill, and it will then pick up speed and size,” he says.
“That’s the PayPal effect, and before that the GE effect. The IPOs of Twitter, Facebook and Google created 4000 millionaires. And those 4000 went and created new businesses; they had money, they had knowledge. They knew how to work in a high-growth startup and they knew each other. They knew how technology worked and they spawned some amazing companies. “Australia’s ecosystem is growing organically, we just need a bit of support.” StartupAUS also announced that Steve Baxter would retire from his position on the board to become the organisation’s chief advocate. Investor Andrew Larsen, founder of co-working space SyncLabs, joins the board. Do you know more on this story or have a tip of your own? Raising capital or launching a startup? Let us know. Follow StartupSmart on Facebook, Twitter, and LinkedIn.
The race is on to get billions of people connected to the internet via a global network of satellites. Europe’s Airbus announced this week that it is to design and build up to 900 satellites for the privately owned OneWeb Ltd, which counts Richard Branson as a board member. A statement from OneWeb said the plan was to begin launches in 2018 to bring “affordable internet access for everyone” by providing approximately 10 terabits per second of low-latency, high-speed broadband. That estimate of 10 terabits per second may be misleading, though. The broadband access rates experienced by customers are more likely to be in the range of 2 to 50 megabits per second (Mb/s). It is an ambitious move and follows reports that the entrepreneur Elon Musk’s company SpaceX is seeking US government approval for a network of 4,000 satellites to provide similar internet access. Accessing the internet via satellite is nothing new. Our own NBN Co plans to launch a satellite this September to help bring people in regional areas onto its high-speed network. But what makes OneWeb and SpaceX’s ventures interesting is their plan to connect people anywhere on the planet, similar to Google’s plan revealed last year. Facebook’s internet.org is another project that aims to make it easier for more people anywhere to connect to the internet. A truly world wide web Only about 40% of the world’s population currently has access to the internet, and annual growth has been slowing, from 10.5% in 2012 to 8% in 2013 and 7.9% last year. Any further growth requires cost-effective access such as a global satellite network. With the mass production of micro-satellites, building such a pervasive broadband internet powered by a constellation of satellites opens up many possibilities. It makes business sense for large internet companies such as Facebook and Google to increase access in the developing world.
Having benefited from the huge uptake of internet connectivity in developed countries, these companies see an as-yet-untapped market opportunity among those who do not currently have internet access. If other large technology companies hungry for users want to increase affordable internet access, then governments should take advantage of these opportunities. Connecting the unconnected to the internet has many advantages for the community. The internet supports development by transforming a younger generation’s ability to acquire knowledge and skills and contribute productively to national growth. It can also help an ageing population to remain active and access cost-effective health care. Connectivity is transforming transport, manufacturing, logistics and environment management. All levels of government can achieve greater efficiency and cost-effectiveness through their citizens being online and connected. Access to digital connectivity is essential in the networked society and it is imperative that there is equitable and universal access throughout the world. Access needs to be affordable The Alliance for Affordable Internet has long highlighted the need to increase access by making the internet affordable to a greater percentage of the global population. Its latest Affordability Report says only 5% of the population of the world’s 49 least developed countries are online. But for the two billion people living on less than US$2 per day, the cost of basic broadband access can exceed 40% of their monthly income. The low income of many regions does not create the necessary demand to drive investment in affordable internet access options. This leaves these communities in a vicious cycle, which is widening the gap between the connected and the unconnected. A global satellite network may be one solution to providing such access. But how will it work?
Delivering broadband over such a network faces significant challenges in the design, deployment and operation of a global infrastructure. The service must also be affordable for those in economically disadvantaged or remote regions. A large constellation of satellites requires agile and cost-effective backhaul technology to provide interconnections between the satellites to form an extension to the internet. Backhaul refers to the links or network required between satellites and the internet to provide customers with internet access. This can be achieved with laser beams or microwave beams operating at millimetre-wave frequencies. The satellites will also require self-aligning systems to pinpoint other satellites and maintain links despite fluctuations in their relative positions. Alternatively, the satellites could form the necessary backhaul by connecting to ground stations located close to major internet gateways. Either way, satellite networks also need ground stations and internet gateways, which adds to the cost and the complexity of deploying and managing the network. With a large constellation of satellites, we can expect a portion of the satellites to be out of action at times. Operators need to factor that into operations and account for the potential impacts and risks of losing satellites. That is why the OneWeb/Airbus deal is for 900 satellites, but the plan is to launch only 700. The current cost of satellite-based broadband access may only be within the reach of those living in rural communities of developed countries and for emergency communications. The key question remains whether operators can reduce the cost further by leveraging these early markets to deliver affordable access to the remaining two billion people living on less than US$2 a day. The world needs connectivity, and it is now needed in places where providing it has been nearly impossible.
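A back-of-the-envelope check of the numbers quoted earlier helps explain why customers would see megabits rather than terabits. Assuming, purely for illustration, that the constellation’s full 10 Tb/s were divided evenly with no contention, overhead or oversubscription:

```python
# How many customers could share the constellation's quoted capacity?
# Assumptions (not OneWeb's): the full aggregate is usable and divided
# evenly, with no contention, protocol overhead or oversubscription.

TOTAL_CAPACITY_BPS = 10e12  # ~10 terabits per second across the fleet

for rate_mbps in (2, 50):  # the per-customer access-rate range quoted earlier
    users = TOTAL_CAPACITY_BPS / (rate_mbps * 1e6)
    print(f"At {rate_mbps} Mb/s per customer: {users:,.0f} simultaneous users")
```

Even under these generous assumptions, the fleet serves on the order of 200,000 to 5 million simultaneous users, which is why per-customer rates of 2 to 50 Mb/s, not the headline aggregate, are the realistic experience.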
Micro-satellites could offer real potential that needs to be explored, and may once more fuel a space race, this time among the internet companies. This article was originally published at The Conversation.
Technolog is the first in a (mostly) weekly wrap-up of the highlights of the technology news and events of the week. These are the tech stories most relevant to what is likely to have an impact on our daily lives. Former CEO of Nokia, Stephen Elop is fired from Microsoft Microsoft CEO Satya Nadella this week announced the departure of ex-Nokia CEO Stephen Elop and several other Microsoft executives in a reorganisation of the company that saw the creation of three groups: Windows and Devices, Cloud and Enterprise, and Applications and Services. Whilst at Nokia, Elop arguably destroyed any chance of Nokia remaining relevant in the smartphone world by insisting that all of Nokia’s smartphones move to the Windows platform instead of Android. Nokia’s death blow came when Elop steered the sale of the smartphone business to Microsoft, where Elop then presided over its inexorable journey into obsolescence and the sacking of most of the former Nokia staff. The reorganisation is a good one for Microsoft and will allow it to concentrate on its core strength, namely enterprise software. It is also having increasing success with the move of this software to the cloud. Security Password manager provider LastPass is hacked Users of the password manager LastPass were advised this week to change their master password after hackers stole users’ details, including email addresses, from LastPass servers. The hackers did not compromise users’ stored password information itself. It seems unlikely that they will be able to crack the stolen encrypted master passwords with the information they obtained, because of the particular security measures LastPass uses. The hack of LastPass showed that even though almost anything can be hacked, how you handle customers afterwards can make all the difference.
LastPass’s fast response and disclosure were praised, along with the extensive security measures it had in place to protect user data in the event of this type of occurrence. Using a password manager is still seen as preferable to using the same password for every account or keeping passwords in Notepad on your computer. Finally, using two-factor authentication with the password manager would have protected users even if their passwords were compromised, and so is still seen as a must with this type of software. 600 million Samsung phones vulnerable to being hijacked A security researcher this week demonstrated a vulnerability in Samsung phones that allows hackers to send malicious code which then installs and runs on those phones. The vulnerability is specific to Samsung phones and comes from the way Samsung updates the SwiftKey software embedded in its keyboard on the phone. These updates are not encrypted, and Samsung allows code downloaded in this way to get around the normal protections of the Android operating system. Although Samsung has issued an update for this problem, it will depend on phone carriers to actually push it out to customers, and they are typically very slow at doing that. In the meantime, there is little users can do to protect themselves other than not connecting to unprotected WiFi. This may also be a good time for them to consider switching to another brand of Android phone. E3 Game Expo 2015 E3, held each year in Los Angeles, is the biggest expo for the electronic games industry. Upcoming game releases are announced at the expo along with new games hardware and accessories. There were simply too many announcements to summarise here, but the remake of the first-person shooter game Doom, although stunning in its detail, seemed gratuitously graphic and violent. Another anticipated release was the action role-playing open-world game, Fallout 4.
Set in a post-nuclear-apocalypse Boston, the game lets the player adopt a male or female role, enter a fallout shelter and, after 200 years have passed, emerge to explore the world above. What will be interesting about this game is the addition of a device (a wrist-mounted Pip-Boy computer) that holds a mobile phone and straps to the wrist of the player, allowing them to interact with the game through that device. Other top upcoming games include Star Wars: Battlefront, Batman: Arkham Knight, Final Fantasy XV and Assassin’s Creed Syndicate. On the console side, Microsoft announced that the Xbox One will support streaming of games to a Windows 10 PC, where it will also be able to support Facebook’s Oculus Rift virtual reality headset. Microsoft also showed off its own augmented reality headset, HoloLens, being used with Minecraft. The video highlights some of the amazing potential of this technology that will be available in the not too distant future. This article was originally published at The Conversation.
I’m going to say something you might not like to hear. Why? Because blaming traffic is probably the easiest way to excuse all of your problems. Not getting enough leads? Sales are sluggish? No people signing up to your newsletter? Haven’t acquired a new user in weeks? Traffic isn’t high enough, we need more visitors. Quick, spend more money on ads and the best SEO guy your limited budget can get, and make sure MORE people come, because that means MORE money, right? Wrong. The answer to your woes is not always the amount of traffic you are getting, but rather the number of conversions coming from that traffic. Conversion rate is, quite simply, your conversions (sales, signups, downloads) divided by the number of website visitors. So, yes, you need a level of traffic you can convert on your website, BUT... All traffic is not good traffic What do we mean by this? Well, let’s say you sell an awesome B2B CRM platform. You’ve spent all this time, money and effort on driving traffic to your sleek, modernist, perfectly-designed landing page. There are so many hits! So many new visitors! We are going to make so much money off this game-changing CRM, finally! And yet only one person converts. Only one person hands over their credit card. One out of thousands. Why? Because the traffic you drove was not good traffic. It was not qualified. It was not comprised of business owners looking to better manage their customer relations; it was comprised of teenage girls and uni students (NOT your target market). Therefore, NOT helpful and most of all not qualified! And this is why a tonne of traffic is not always the best goal to have. So, how do you become a magnet to your target market, then? Step 1: Do your research. Talk to potential customers. Find out where they hang out, what they read, and what they think about goats grazing their overgrown lawns. Do they google stuff or ask their Twitter and Facebook friends? What words do they use? There are tools for all of this, by the way.
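(A quick aside: that conversion-rate definition is simple enough to sanity-check in a couple of lines of Python. The numbers here are purely illustrative.)

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversions (sales, signups, downloads) as a percentage of visitors."""
    return 0.0 if visitors == 0 else conversions / visitors * 100

# Illustrative numbers: 30 signups from 1,500 visitors
print(conversion_rate(30, 1500))  # → 2.0
```

Plug in your own analytics numbers; the point is that doubling this ratio is usually cheaper than doubling your traffic.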
SEMRush and Social Mention are a good start. Step 2: Use this information to your advantage. Hang out where they hang out. If they’ve never heard of Twitter, there’s no point hanging around there either. Be seen where they are. Speak their language. Sooner or later they’ll want to talk to you about your products or services. Also, it’s not enough to push raging rapids of people to your site when it’s like a bucket full of holes, is it? Make sure your bucket doesn’t have holes Buckets? Sorry, what? Your bucket is your website. Imagine filling a bucket with water when it is riddled with 50 holes. Why are you filling it with water? It is pointless. It’s all going to run out again. This holey bucket could be your website if all you are doing is focusing on traffic instead of optimising it for your visitors. The water running out is your traffic, lost as missed leads. Plug those holes. Here are the absolute basic conversion tactics every marketer needs to nail. 1. Make sure your website is super-fast because speed is a killer Did you know that visitors expect your website to load in just 2 seconds? They also tend to abandon a site that hasn’t loaded within 3 seconds! 79% of web shoppers who have trouble with website performance say they won’t return to the site to buy again, and around 44% of them would tell a friend if they had a poor experience shopping online. And guess what? Kissmetrics says that once you lose a conversion from a visitor, they are almost CERTAIN to pass on the negative experience to their friends and colleagues too. That’s a lot of lead loss due to a simple sluggish load time. 2. Have a compelling value proposition Why would customers use your new CRM platform over well-established brands? Answer that question. 3. Make it easy to find stuff If anything is more than one click away, you’ve lost more than half of your web traffic. Why? Because people are busy and want everything instantly, and they are easily annoyed if they have to work hard. 
Do some usability testing to eliminate annoying experiences for your customers. Don’t test it as you, test it as them. Their annoyance threshold is much lower than yours! 4. Look like someone that can be trusted Show off your happy customers and what they have to say about your service. An ecommerce site needs to have an SSL Certificate or trust badges that tell people you’re legit and that their information is safe and secure. 5. Do some A/B testing If you change the colour of your “buy now” button to green, do you get more sales? Does a $1 offer work better than “free”? Test it. Test everything. So, which is more important then? That is the question. We say plug your bucket. Optimise your website so you know it will convert. Make sure you have done everything possible on your site to capture your leads. Then focus on driving those few people who want your goats eating their lawn into your intact bucket. And they will convert. Optimise first, drive traffic second. Gary Tramer is CEO at LeadChat – a Melbourne-based tech startup that helps businesses turn their web visitors into hot leads around the clock. For more info, visit www.leadchat.com or tweet @leadchat. Follow StartupSmart on Facebook, Twitter, and LinkedIn.
In This Will Destroy That, also known as Book V, Chapter 2 of Notre Dame de Paris, Victor Hugo presents his famous argument that it was the invention of the printing press that destroyed the edifice of the gothic cathedral. Stories, hopes and dreams had once been inscribed in stone and statuary, wrote Hugo. But with the arrival of new printing technologies, literature replaced architecture. Today, “this” may well be destroying “that” again, as the Galaxy of the Internet replaces the Gutenberg Universe. If a book is becoming something that can be downloaded from the app store, texted to your mobile phone, read in 140-character instalments on Twitter, or, indeed, watched on YouTube, what will that do to literature – and particularly Hugo’s favourite literary form, the novel? Debates about the future of the book are invariably informed by conversations about the death of the novel. But as far as the digital novel is concerned, it often seems we’re in – dare I say it – the analogue phase. The publishing industry mostly focuses on digital technologies as a means for content delivery – that is, on wifi as a replacement for print, ink, and trucks. In terms of fictional works specifically created for a digital environment, publishers are mostly interested in digital shorts or eBook singles. At 10,000 words, these are longer than a short story and shorter than a printed novel, which, in every other respect, they continue to resemble. Digital editions of classic novels are also common. Some, such as the Random House edition of Anthony Burgess’s A Clockwork Orange (1962), available from the App Store, are innovatively designed, bringing the novel into dialogue with an encyclopaedic array of archival materials, including Burgess’s annotated manuscript, old book covers, videos and photographs. 
Also in this category is Faber’s digital edition of John Buchan’s 39 Steps (2013), in which the text unfolds within a digital landscape that you can actually explore, albeit to a limited degree, by opening a newspaper or reading a letter. But there is a strong sense in which novels of this sort, transplanted into what are essentially gaming-style environments for which the novel form was not designed, can be experienced as deeply frustrating. This is because the novel, and novel reading, is supported by a particular kind of consciousness that Marshall McLuhan memorably called the “Gutenberg mind”. Novels are linear and sequential; post-print culture is interactive and multidimensional. Novels draw the mind into deeply imagined worlds; digital culture draws the mind outward, assembling its stories in the interstices of a globally networked culture. For the novel to become digital, writers and publishers need to think about digital media as something more than just an alternative publishing vehicle for the same old thing. The fact of being digital must eventually change the shape of the novel, and transform the language. Far from destroying literature, or the novel genre, digital experimentation can be understood as perfectly in keeping with the history of the novel form. There have been novels in letters, novels in pictures, novels in poetry, and novels which, like Robinson Crusoe (1719), so successfully claimed to be factual accounts of actual events that they were reported in the contemporary papers as news stories. It is in the nature of the novel to constantly outrun the attempt to pin it down. So too, technology has always transformed the novel. Take Dickens, for example, whose books were shaped by the logic of the industrial printing press and the monthly and weekly serial – comprising a long series of episodes strung together with a cliffhanger to mark the end of each instalment. So what does digital media do differently? 
Most obviously, digital technology is multimodal. It combines text, pictures, movement and sound. But this does not pose much of a conceptual challenge for writers, thanks, perhaps, to the extensive groundwork already laid by the graphic novel. Rather, the biggest challenge that digital technology poses to the novel is the fact that digital media isn’t linear – digital technology is multidimensional, allowing stories to expand, often wildly and unpredictably, in nonlinear patterns. Novelistic narratives – as we currently know them – are sequential, largely predicated on the presence of a single unifying consciousness, and designed to be read across time in the order designated by the author. Last year, this presented a serious problem for many would-be readers of David Mitchell’s short story The Right Sort, a fictional work composed of 280 tweets, sent out in groups of 20, twice a day, for seven days. Frustrated readers complained that they couldn’t catch the tweets – that half the narrative had gone missing in the digital ether. It was far more comfortable to read the printed version via the link in the Guardian – where readers found a beautifully turned, albeit somewhat conventional, short story, whose major concession to its digital environment was that it was composed of short scenic snatches 140 characters long. Another difference between digital and print technologies is that the printed novel encourages private reading, whereas digital readers tend to share their experiences in networked, highly social environments. Today, even authors of traditional novels are expected to maintain an online presence. In order to publish a book, you need a hashtag, a Facebook page, a blog tour, a book trailer on Vimeo or YouTube and a Twitter account. 
Much was made of the potential for this type of media to supplement a novelistic text when Richard House’s “digitally augmented” thriller The Kills (2013) was long-listed for the Man Booker Prize. But potential exists for this kind of interaction to go beyond merely “augmenting” a novel, to integrating with, and actually expanding, it. In the not-too-distant future, digital novels will find themselves expanding horizontally across platforms, and readers may well find themselves interacting with, transforming, and even contributing to the content. This may well be the moment when the walls of literature (as we know it) come tumbling down. Yet the scathing critic might do well to remember that Dickens was revolutionary in his day, not only for charting the course of serialisation, thereby making literature popular and accessible, but also for making ordinary people the subject matter of his writing. One novel that gives you a glimpse of what the digital novel might turn out to be is The Silent History (2014), created by Eli Horowitz – best known as an editor at the New York-based literary journal McSweeney’s – in collaboration with Matthew Derby and Kevin Moffett. The story is set in the second quarter of the 21st century, when children begin to be born who fail to develop the necessary cognitive functions to acquire, understand or use language. The prose is dazzling, the characters and their predicaments are moving, and, just as importantly, the digital aspects of the book are set deeply in its design. They are not only present in its themes – though these aptly deal with the problem of communication – but also in its collaborative structure and interactive details. The Silent History is available in print and ink, but it was originally developed as an app. 
The written sections of the text – called “Testimonies” – which contain the main trajectory of the story, were uploaded sequentially, along with a variety of mixed-media elements, including video and photographs. One of the striking aspects of the work is its capacity to grow through user-generated content. The writers gradually expanded the “Testimonies” through the inclusion of “Field Reports” – that is, short narratives submitted by readers and other writers. These can only be unlocked using the map on your mobile or tablet device at a specific location – a bit like a GPS-activated and endlessly proliferating instalment of Dickens. The digital novel wasn’t on show at the Sydney Writers’ Festival last week, but there are clear signs that this counter-cultural curiosity is edging its way into the literary mainstream. But the wary can rest assured. Despite Hugo’s protestations, architecture wasn’t destroyed by the printing press. It was only transformed. So too, the novel isn’t over yet. This article originally appeared on The Conversation.
Facebook has launched a ‘lite’ version of its Android app that works well on poor networks and outdated phones and is designed specifically for the developing world. TechCrunch reports the app doesn’t offer data-intensive features like videos or Nearby Friends. These stripped-back features allow Facebook users to access the app quickly and cheaply from some of the most remote corners of the globe. It’s part of a long-running push from Facebook to win users in the developing world. Five years ago it launched Facebook Zero, a text-only version of Facebook that aimed to encourage people to use the internet. More recently it has been running Internet.org, a program that aims to bring five billion people online. Uber celebrates 5th birthday Transportation network startup Uber has celebrated its fifth birthday, with founder and chief executive officer Travis Kalanick outlining his broad vision for the company. The Verge reports Kalanick told the audience in plain terms that Uber wants to make its service so inexpensive that it’s cheaper not just than owning a car, but even than public transport. TechCrunch writer joins 500 Startups TechCrunch writer Ryan Lawler is joining 500 Startups as a venture partner. He describes the decision to move from TechCrunch to 500 Startups as choosing between “two delicious pieces of cake”. Overnight The Dow Jones Industrial Average is down 170.69 to 17,905.58. The Australian dollar is currently trading at US77 cents. Do you know more on this story or have a tip of your own? Raising capital or launching a startup? Let us know. Follow StartupSmart on Facebook, Twitter, and LinkedIn.
Augmented reality startup Magic Leap has announced it is launching an augmented reality developer platform, according to TechCrunch. Last year Google invested more than $600 million in the company, which says it can project light and graphics into the human eye alongside what it sees naturally. Speaking at the MIT EmTech Digital conference, Magic Leap chief executive Rony Abovitz said the company was ready to start training developers. “We’re out of the R&D phase and into the transition to real product introduction,” he said. “There is no off-the-shelf stuff that does what we’re describing.” Native Americans and domestic violence survivors protest Facebook’s naming policy Native Americans, domestic violence survivors, drag queens and others have come together to protest at Facebook’s headquarters in response to the company’s policy of only allowing people to use their “real” names. Fairfax reports more than 50 protesters held up picket signs and chanted outside Facebook’s headquarters in Menlo Park, California. Facebook says people need to use their real names in order to prevent instances of bullying or inappropriate behaviour on the social network, but has softened that position after complaints from the LGBTIQ community. However, the protesters said they wanted Facebook to do more by not putting the onus on vulnerable groups to prove their identity. Apple chief lashes out against companies that compromise customer privacy Apple’s chief executive Tim Cook has lashed out at companies that make a trade-off between customer privacy and security, according to TechCrunch. Speaking at the EPIC Champions of Freedom event in Washington, Cook said people have the fundamental right to privacy. “I’m speaking to you from Silicon Valley, where some of the most prominent and successful companies have built their businesses by lulling their customers into complacency about their personal information,” he said. 
“They’re gobbling up everything they can learn about you and trying to monetize it. We think that’s wrong. And it’s not the kind of company that Apple wants to be.” Overnight The Dow Jones Industrial Average is down 28.43 points, falling 0.16% to 18,011.94. The Aussie dollar is currently trading at around 77.72 US cents. Do you know more on this story or have a tip of your own? Raising capital or launching a startup? Let us know. Follow StartupSmart on Facebook, Twitter, and LinkedIn.
Spending management company Coupa Software has raised $US80 million, making the cloud startup the latest fast-growing business to land a valuation of more than $1 billion. The round was led by T. Rowe Price along with Iconiq Capital, the firm managing Facebook founder Mark Zuckerberg and Twitter founder Jack Dorsey’s personal investments, according to Re/code. The latest capital injection is the startup’s seventh investment round, bringing the total capital raised to date to $US169 million. Microsoft buys German startup for more than $100 million Microsoft will purchase the German startup behind the Wunderlist to-do list app for more than $100 million, according to The Wall Street Journal. The acquisition is part of a bid to improve Microsoft’s mobile, email and calendar applications. The Berlin-based 6Wunderkinder GmbH is backed by US venture capital firm Sequoia Capital and other VCs. Apple to launch music streaming service Apple is launching its own music streaming service to compete with Spotify, according to The Wall Street Journal. The new Apple service will reportedly cost $10 a month; however, unlike Spotify, the company will not allow users to stream its entire catalogue. The move represents a significant shift away from the downloading model that helped Apple revolutionise music a decade ago. Overnight The Dow Jones Industrial Average is up 29.69 points, rising 0.16% yesterday to 18,040.37. The Aussie dollar is currently trading at around 76.08 US cents. Do you know more on this story or have a tip of your own? Raising capital or launching a startup? Let us know. Follow StartupSmart on Facebook, Twitter, and LinkedIn.
Advertising represents 90% of Google’s $US66 billion in annual revenue. What makes this figure especially astounding is the suggestion that the vast majority of web advertising may never actually be seen by end users, and if it is, is largely ineffective. Google, Facebook and other online advertising companies are less worried about the effectiveness of ads and more concerned with showing the ads to users in order to bill their customers. This business has been made harder, however, by the rise in the use of ad blocking software over the past 10 years. Ad blocking Ad blocking software basically applies a list of filters to every web page that is loaded and either hides the ad from showing, or blocks the request to the site that hosts the ad. Not all ads are blocked, however. Depending on the software, some sites and their ads can be “whitelisted” and allowed to show. Adblock Plus, for example, applies the principle that only “bad” advertising should be blocked. Bad ads are those that do not meet a set of criteria requiring that ads not be annoying or intrusive, not distort the page they are embedded in, and be appropriate for the context. Companies that produce “good” ads can then pay Adblock Plus to have their ads whitelisted. uBlock, another popular ad blocking plugin for browsers, simply blocks anything that looks like advertising. Ad blocking software is now used by 25% of users in the US, rising to 40% of users in countries like Germany. Native advertising Partly in response to the decline in effectiveness of traditional online ads, advertisers have turned increasingly to native advertising. Native ads have been around for some time. They can span from the placement of products in a TV show or film, through to sponsored articles or posts on social media that are written to look like they are part of the normal content of the site. 
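At its simplest, the filter-and-whitelist mechanism described earlier boils down to matching each outgoing request against a blocklist, with whitelisted hosts taking precedence. A toy sketch, with invented hostnames – real blockers such as Adblock Plus and uBlock use far richer rule syntaxes (element hiding, wildcards, per-domain options):

```python
from urllib.parse import urlparse

# Toy filter lists with made-up hosts, purely for illustration.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}
WHITELIST = {"ads.example.com"}  # "acceptable ads" allowed through

def should_block(url: str) -> bool:
    """Block a request if its host is blocklisted and not whitelisted."""
    host = urlparse(url).hostname or ""
    if host in WHITELIST:
        return False
    return host in BLOCKLIST

print(should_block("https://tracker.example.net/pixel.gif"))  # True
print(should_block("https://ads.example.com/banner.js"))      # False (whitelisted)
```

The whitelist-first check is what makes the Adblock Plus “acceptable ads” business model possible: a host can be on both lists, and the whitelist wins.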
Sponsored tweets on Twitter and posts on Facebook have become more prevalent and have been shown to be far more effective than banner or search-based ads. However, even though these types of ads are more effective, those on social media, at least, are still not considered terribly so. Here again, users are turning to software to remove native ads. On Facebook, products like Facebook Purity give the user a high degree of flexibility in removing advertising from Facebook on the web. Advertisers in Germany decided to tackle the problem of ad blocking head on by taking Eyeo, the company behind Adblock Plus, to court. This avenue was shut down, at least in Germany, when the Munich court ruled that ad blocking was indeed legal. The threat to mobile If matters on the web were not bad enough, advertisers face an even bigger problem on mobile. Several telecommunications companies are in the process of deploying ad blocking software to their mobile networks. One technology they are using comes from an Israeli startup called Shine and works at the level of the mobile network to block ads. From the mobile carrier’s perspective, advertising has been estimated to use anything between 10% and 50% of a mobile phone user’s bandwidth, which ultimately the end user pays for. Ironically, however, the telecommunications companies are possibly seeking part of that income by forcing companies like Google to pay them a share for ads transmitted on their networks. Even if this doesn’t succeed, the carriers can switch on ad blocking to enhance the service for their users or charge them for an ad-free environment. An ad-free world? If ads are successfully blocked in the mobile world, which is seen as becoming the dominant platform for advertising in the future, Google and others contend that they will need to start charging for software that has been provided for free but funded through advertising. 
From an end user’s perspective, moving from a free service to a paid one would actually be a good thing because it would make the relationship between the user and companies like Google more explicit. It is hard to argue that it is acceptable to secretly track users and capture their personal details when the user is paying for a service. Whether this model would be as financially lucrative for Google as selling ads remains to be seen. Advertising, even in its invasive and disruptive forms, is not going to disappear overnight. In-app advertising on Android and Apple devices is going to be harder to block, even at the mobile carrier level. Advertisers will move increasingly to native formats, which are harder to block. In an online landscape where competition is global and only a few gatekeepers control the platforms for advertising, it is hard to see how to arrive at a solution that will keep all parties in the equation happy. David Glance is Director of the UWA Centre for Software Practice at the University of Western Australia. This article was originally published on The Conversation. Read the original article.
Above: GoPro's Jump-ready 360 camera array uses HERO4 camera modules and allows all 16 cameras to act as one. Key announcements by Google at its annual Google I/O developer conference have put virtual reality on the cusp of going mainstream, according to two key figures in the Australian virtual reality community. During the conference, Google unveiled a new virtual reality ecosystem known as Jump, new camera arrays that can capture 360-degree vision, and support for streaming stereoscopic virtual reality videos on YouTube. Virtual Reality Ventures managing director Stefan Pernar told StartupSmart the commodification of virtual reality is likely to happen over the next 12 to 18 months. “The big news from Google I/O is a virtual-reality-targeted GoPro rig, one in a six-camera version and one in a 16-camera version, along with the Jump initiative. There’s also a set of assembly tools that allows you to stitch the thing together and extract 3D info from a range of perspectives,” Pernar says. Pernar says that creating and sharing a virtual reality experience will soon be as easy as buying a GoPro rig and uploading the video clip to YouTube 360. He also points out that a string of consumer virtual reality headsets is set to hit the market in late 2015 and the first half of 2016, including Sony’s Morpheus, the HTC/Valve device and the consumer version of Facebook’s Oculus Rift. Samsung is also likely to heavily push its Gear VR headset in the lead-up to Christmas. “That’s what Facebook buying Oculus was all about – sharing your user-created virtual reality experience with your family and friends on Facebook,” Pernar says. “The landscape is changing. Virtual reality companies’ focus will not so much be on stitching video together. That problem has been solved. 
“Instead, they will need to add value, in the form of interactive and value-added content.” According to Pernar, creating more professional virtual reality shoots is a potential growth opportunity for virtual reality producers. “The smartphone camera in your pocket is now better than the cameras used to shoot Star Wars. So why doesn’t everyone just whip out their phone and shoot Star Wars? It’s not just a matter of hardware – there’s also cinematography. “A year ago, virtual reality was just a tech process. Even simple, stationary shots were state-of-the-art. Now you need to manage movement.” The announcements mark a very exciting time for the VR industry, according to Anton Andreacchio – whose virtual reality production company Jumpgate Virtual Reality recently released a 90-second trailer for the upcoming Cyan Films horror film Scare Campaign. “It’s what happened with film – it’s democratisation. On the tech side, equipment isn’t cost-prohibitive anymore. From a hardware perspective, this solves the post-production issue of stitching video clips together,” Andreacchio says. “This is another big step, the process of making [VR] a systematic production method, and getting big companies a clearer path.” As for Pernar’s sentiments about VR going mainstream, Andreacchio is cautiously optimistic. “I know I should say yes 100%, but we don’t know. But certainly a lot of time, money and energy is being thrown at this at the moment. So I suspect we are, but ultimately that’s up to the consumers,” he says. Do you know more on this story or have a tip of your own? Raising capital or launching a startup? Let us know. Follow StartupSmart on Facebook, Twitter, and LinkedIn.
New user onboarding turns signups into successful users as quickly and easily as possible. Done well, it generates excitement about future use, provides learning by using the product, and takes care of basic configuration in a fun and engaging way. As a designer on the Atlassian Growth team, one of my responsibilities is coming up with these onboarding flows for us to test in our products. We run A/B experiments to determine what ideas will increase various success metrics like conversion, monthly active users (MAU), and usage minutes. Popularised by consumer products like Facebook & Twitter and the work that Samuel Hulick has been doing, user onboarding is taking the product design world by storm. It seems every startup in the valley is focused on delivering an incredible onboarding and setup flow to beat all others, all in the name of increasing MAU. However, I believe if a product is intuitive by default it should require no onboarding flow at all. The days of RTFM are gone; the best products don’t require hand-holding to understand core functionality from the get-go. Which begs the question, what makes a product intuitive? 1. A clear value proposition A whisk is a steel instrument. A bowl is a concave surface. A chicken egg is the unfertilised gamete laid by a female bird. This is listing and describing features. Instead what you need to do is show your customer how she can make an omelette. (Thanks to Jay for the great metaphor.) If you haven’t heard of the Jobs-to-be-Done framework from Clayton Christensen, I suggest you check it out. In short, customers ‘hire’ your product for a job they want done, and you need to succinctly communicate how your product will do this job for them. In the example above, our customer is after an omelette, so you need to provide not just the tools and ingredients, but also the recipe so she gets what she’s after. 2. Unambiguous terminology Do you introduce a lot of bespoke terminology? 
Are you using words which most people associate with a different definition? Unless you’re Apple, you probably can’t invent a new word or change the definition of existing words. We have a product called Confluence which is a collaborative wiki and knowledge base. It’s kind of a hybrid between Google Docs, Wikipedia, and StackExchange for your company. Everything in Confluence lives within multiple ‘spaces,’ kind of like folders on a computer. The problem with the word ‘space’ is that it’s incredibly abstract and has many different definitions. Customers trialling Confluence are confused when they encounter it, so our onboarding flows spend a lot of time explaining what we mean by the term. Terminology like Share, Menu, Project, and Avatar all have distinct definitions within the context of software (and associated well-known symbols). When people interact with these words or icons in your interface, they expect one thing and are confused when you show them something else. Ensure your definition and usage is the same as what people expect, and try to use as little bespoke terminology as possible. 3. Well-designed affordances “An affordance is often taken as a relation between an object or an environment and an organism, that affords the opportunity for that organism to perform an action. For example, a knob affords twisting, and perhaps pushing, while a cord affords pulling.” — Wikipedia Put simply, affordances are the things your users interact with. Buttons, gestures, drop-downs, inputs. We have a Breville toaster and a Breville microwave at home. One thing I love about the toaster is this ‘A bit more’ button: This button is neat because it explains what it will do in a simple and human way, it is physically a button that can be pressed, it looks like a button differentiated against its surroundings, and it has a ring light that shows you when it’s on. It’s quite straightforward and simple. Contrast that with our Breville microwave: What’s the difference? 
The affordances on the toaster are clear and intuitive. I know what they’ll do; I don’t need a manual to tell me. The microwave on the other hand is interface voodoo: a myriad of bespoke icons on the display, inconsistent long presses and short presses, a multi-function contextual knob, and complex button labels. What does Time Weight Defrost do? Or the button simply labeled Microwave? It even has an oddly-specific, dedicated Potato button! Why? Does anyone cook potatoes in their microwave? It sounds obvious but it’s often overlooked: make sure your product has intuitive affordances like the toaster, and not the microwave. 4. Sane defaults There’s an old computer science saying that a computer should never ask you a question that it can answer for itself. In most cases your product should be able to determine a lot of configuration on its own; for example the user’s timezone and location. In the case of new users, you absolutely know that people don’t start with their own data in your product. When you create a new email account, it usually doesn’t come with a bunch of emails in your inbox. Your users will be looking at a lot of blank screens. Make sure you have well-designed empty states or sample data to catch them and highlight their next steps. For times when the product can’t easily know how to set itself up, rely on qualitative research and analytics to inform your default configuration. If you know most people have four steps in their workflow, the default column count on your task management app should be four. If you know most of your audience are signing up to design a newsletter, surface that above other options, like designing a birthday card. If you know most people are using two of your products together, perhaps connect and integrate them both by default. 5. 
Meeting expectations You have the ability to affect a potential customer’s expectations before they log in for the first time: you control the advertisements you run, the content on your marketing website, the emails you send, and what you post on your Facebook page. So if you’re saying one thing on your marketing website (“our product is great for thing x!”) and then don’t follow through in-product, you’re going to lose that customer because you’ve built up their expectations and then demolished them. How do you begin to address this? Conduct an audit of your whole customer journey from initial impression right through to success (whatever that is) and make sure you’re sending the same consistent message about what your product does the whole way through. This exercise is also a great way to surface differences of opinion in your organisation. Your marketing department might think your product does one thing well, whereas the product managers might disagree! You now have a good base for making your product intuitive. Once you feel that you’re satisfying most of the points I’ve outlined here, start iterating on the new-user onboarding experience through experimentation to further optimise each of these points. But if you’ve been nodding your head the whole way and saying, “yep, our product isn’t here yet”, then perhaps go back to the drawing board and aim to make your product more intuitive by default before throwing everything you have into new-user onboarding experiments. A/B experiments are useful for turning a good product into a great one through iteration, but not that useful when your product has a long way to go before it’s somewhat intuitive. Make sure you’re starting off at a certain level before iterating through experimentation, and remember, the best onboarding is an intuitive product by default. Benjamin Humphrey is a designer at Atlassian. If you enjoyed this post and want more of the same, you might consider following Designing Atlassian! 
This article first appeared on Medium.