cellular telephone

Your mobile phone knows where you go and what you do – and maybe even when you're feeling down

8:38AM | Tuesday, 4 August

Today’s smartphones are equipped with powerful sensing capabilities. Using these sensors, your smartphone potentially has a record of how active you are, how much you sleep and where you go. If we look at the data those sensors gather, we can get a pretty good idea of what someone’s typical behavior is like.

When a person is depressed, their behavior often changes. They may lose interest in activities, experience changes in their sleep cycles or withdraw from social interactions. And their phone, typically close at hand, could be used to detect these behavior changes.

In a study recently published in the Journal of Medical Internet Research, we investigated whether a person’s movements and activities as recorded by their smartphone signaled behavioral changes associated with depression. We found that they are, in fact, closely correlated.

How did we use smartphones to detect depression?

We recruited 28 participants, 14 with depressive symptoms and 14 without. We started the experiment by quantifying their depressive symptoms using a test called the Patient Health Questionnaire (PHQ-9). The PHQ-9 consists of nine questions asking about the presence of several symptoms of depression, such as loss of interest, hopelessness, changes in sleep, tiredness and trouble concentrating. It’s a very common test. In fact, you might have taken it at your last doctor’s appointment.

Then we collected data on GPS location and phone usage recorded by the built-in sensors on each participant’s phone for two weeks. We also developed a sensor to calculate how long and how often participants used their phones. It tracked all phone activity except for calls.

Next, we developed algorithms that estimated certain behavioral markers we thought might be related to depression. These markers included the patterns of movement through geographical space, the total distance a person moved during the two-week period, the number of locations visited, the speed at which the individual moved between locations, and the amount of time he or she spent in different locations.

Finally, we analyzed the relationship between these markers and the severity of depressive symptoms.
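The paper itself doesn’t publish code, but a minimal sketch shows how markers like these can be derived from raw location data. The Python below is illustrative only: it assumes GPS samples arrive as time-sorted (unix timestamp, latitude, longitude) tuples, and a coarse grid-bucketing step stands in for the more sophisticated location clustering a study like this would actually use. All names (`mobility_markers`, `grid`) are ours, not the researchers’.

```python
import math
from collections import Counter

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two GPS fixes, in kilometres.
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def mobility_markers(samples, grid=0.005):
    """samples: time-sorted list of (unix_seconds, lat, lon) tuples.
    Returns rough analogues of the study's markers: total distance
    travelled, number of distinct places, and hours spent per place."""
    total_km = 0.0
    seconds_per_place = Counter()
    for (t0, la0, lo0), (t1, la1, lo1) in zip(samples, samples[1:]):
        total_km += haversine_km(la0, lo0, la1, lo1)
        # Bucket each fix onto a ~500 m grid cell as a crude "place",
        # attributing the whole interval to the cell it started in.
        cell = (round(la0 / grid), round(lo0 / grid))
        seconds_per_place[cell] += t1 - t0
    return {
        "total_distance_km": total_km,
        "num_places": len(seconds_per_place),
        "hours_per_place": {c: s / 3600.0 for c, s in seconds_per_place.items()},
    }
```

Markers like these, computed over a two-week window, can then be correlated against each participant’s PHQ-9 score.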
Which behaviors can identify depression?

We found that a number of behavioral markers strongly correlated with PHQ-9 depression scores. These included markers that captured patterns of movement, mobility, the time spent in different locations, and phone usage duration.

Participants who were more depressed had more irregular movement patterns. This means that, for example, they left home for work at a different time each day, while less depressed individuals went to work around the same time every day.

In addition, the more depressed participants were less mobile and spent most of their time in fewer locations. We also found a correlation between phone usage and depression scores. The more depressed participants used their mobile phones more often and for longer periods of time, but not for making phone calls. This activity may have included texting, playing games, reading or other activities.

Pushing mHealth beyond treatment to diagnosis

This study used a relatively small sample, but it still offers an interesting piece of evidence on how mobile phones could detect symptoms of depression. Another study, from Dartmouth College, used mobile phone sensors to look into several aspects of students’ lives, and also found that a number of them, including sleep, sociability and physical activity, correlated with depression. We still need studies with larger groups of people to learn which daily-life behaviors are related to depression in the general population.

As mobile phones have become more ubiquitous, they have become important tools for health care. This is called mobile health, or mHealth for short. mHealth interventions are effective, and are part of national health care systems in many European countries and Australia.

mHealth is sometimes used to assist with diagnosis. For example, in mobile telemedicine, patients can provide information, such as a picture of a skin injury, to their doctors using their mobile devices.

In mental health care, mHealth has been used to monitor patients by sending them daily questionnaires about their mood and daily activities, either through SMS or specialized smartphone apps. However, without human support from therapists or coaches, patients tend not to use these tools. In addition, patients repeatedly need to input data about their mood and behaviors, often several times per day, which is a major factor in non-adherence to the treatment.

For people at risk of depression, our research means that their health could be passively monitored without any burden on their part. They don’t need to input data about their mood, daily activities or sleep quality, and care providers can check in if they see a behavior that needs more personal support.

In addition, mobile phone data could help clinicians understand how depressive symptoms and depression change over time. This could help us develop better treatments or strategies to help people with depression. Depression is fairly common – about 6.9% of US adults have at least one major depressive episode each year – so this could really make a difference.

More than two-thirds of all depressed patients want psychological support, but more than 70% of them face barriers, such as high costs, transportation, stigma concerns and lack of motivation, that make it hard to access traditional psychotherapy. mHealth can help overcome these barriers by eliminating the need for regular, usually costly, visits to a therapist and the need to travel, delivering care to those who need it wherever they are.

Sohrob Saeb is Research Fellow, Center for Behavior Intervention Technologies at Northwestern University. This article was originally published on The Conversation. Read the original article.

Technolog: If at first you don't succeed, try again: Windows 10 and Google Glass

8:31AM | Monday, 3 August

Last week some old technology reappeared, refreshed for another try at getting it right this time around.

Microsoft released Windows 10 and trumpeted as its main feature the return of the Start menu, which had been infamously axed in the previous version, Windows 8. Windows 10 also brings back the desktop as the main interface, relegating Windows 8’s live tiles to an extension of the Start menu.

The user interface is also a major reversal of the previous version of Windows’ attempt to focus on being a mobile platform supporting touch. Now that Microsoft has all but given up on its mobile phones, and its tablet doesn’t even get a mention on the global sales leaderboard, the PC is still really where it dominates. Touch screens on PCs never became a thing, so supporting the traditional keyboard and mouse/trackpad arrangement makes much more sense.

The other technology that Microsoft effectively killed off – or at least will try never to speak of again – was its browser, Internet Explorer. Windows 10 introduces Microsoft Edge, a leaner, stripped-down version of Internet Explorer. Internet Explorer had become, possibly unfairly, the browser most universally disliked by web developers. This was largely because versions of the browser were tied to updates of Windows. Supporting Internet Explorer meant supporting potentially old and outdated versions long after other browsers like Chrome and Firefox had moved on to support new standards and features.

In fact, the dependency of the browser on the operating system led companies to become tied to a particular version of Windows because of their reliance on particular versions of Internet Explorer to run their corporate applications.

The significance of Microsoft’s move to Edge is that a range of technologies that were once the future of running software in the browser have disappeared as well. Gone is support for Silverlight, Microsoft’s version of Flash, and for a technology called ActiveX, a much earlier attempt by Microsoft to allow sophisticated applications to run in the browser. ActiveX, in particular, introduced security concerns and as a consequence was never widely adopted. Their absence is unlikely to be missed.

Right now, users who have rushed to upgrade will be getting the first of many bug fixes as the inevitable problems get ironed out. For companies still on Windows 7, the familiarity of Windows 10 may make it a more tempting upgrade, but it is not clear there is a compelling enough reason to do so. In the meantime, of course, the PC market continues to decline, with more users relying on mobile devices instead.

Google Glass relaunched as a business tool

Google has apparently relaunched its Glass wearable computer, but this time aimed it at the business world rather than consumers. Google is hoping that if nobody actually sees anyone wearing the devices, it will not attract the same level of “ridicule” and concerns about privacy that the original consumer version did.

The new version of Glass has a faster processor, faster wireless and a longer battery life. It can also fold up, which the first version couldn’t.

Whilst Google’s move may attract less negative publicity for the wearable, it is still hard to see what the particular benefit of Google Glass will ultimately be.
The user interface’s limitations mean it is not a great device for consuming content, and its other function, as a hands-free video streaming device, would be much better handled by something portable that is worn attached to clothing rather than to a person’s face.

By limiting the market in this way, it is also hard to imagine that Glass will actually be much of a revenue generator for Google.

David Glance is Director of the UWA Centre for Software Practice at the University of Western Australia. This article was originally published on The Conversation. Read the original article.

Technolog: Elop fired, LastPass hacked, Samsung can be hacked, and new games at E3

6:00AM | Friday, 19 June

Technolog is the first in a (mostly) weekly wrap-up of the highlights of the technology news and events of the week. These are the tech stories most relevant to understanding what is likely to have an impact on our daily lives.

Former CEO of Nokia, Stephen Elop, is fired from Microsoft

Microsoft CEO Satya Nadella this week announced the departure of ex-Nokia CEO Stephen Elop and several other Microsoft executives in a reorganisation of the company that saw the creation of three groups: Windows and Devices, Cloud and Enterprise, and Applications and Services.

Whilst at Nokia, Elop arguably destroyed any chance of Nokia remaining relevant in the smartphone world by insisting that all of Nokia’s smartphones move to support the Windows platform instead of Android. Nokia’s death blow came when Elop steered the sale of the smartphone business to Microsoft, where Elop then presided over its inexorable journey into obsolescence and the sacking of most of the former Nokia staff.

The reorganisation is a good one for Microsoft and will allow it to concentrate on its core strength, namely enterprise software. It is also having increasing success moving this software to the cloud.

Password manager provider LastPass is hacked

Users of the password manager LastPass were advised this week to change their master password after hackers stole users’ details, including emails, from LastPass servers. The hackers did not compromise users’ stored password information itself, and it seems unlikely that they will be able to crack the stolen encrypted master passwords with the information they obtained, because of the particular security measures LastPass uses.

The hack of LastPass showed that even though almost anything can be hacked, how you handle customers afterwards can make all the difference. LastPass’s fast response and disclosure were praised, along with the extensive security measures it had in place to protect user data in the event of this type of occurrence.

Using a password manager is still seen as preferable to using the same password for every account or keeping passwords in Notepad on your computer. Finally, using two-factor authentication with the password manager would still have protected users even if their passwords were compromised, and so is still seen as a must with this type of software.

600 million Samsung phones vulnerable to being hijacked

A security researcher this week demonstrated a vulnerability in Samsung phones that allows hackers to send malicious code to install and run on those phones. The vulnerability is specific to Samsung phones and comes from the way Samsung updates the SwiftKey software embedded in the phones’ keyboard. These updates are not encrypted, and Samsung allows code downloaded in this way to get around the normal protections of the Android operating system.

Although Samsung has issued an update for this problem, it will depend on phone carriers to actually push it out to customers, and they are typically very slow at doing that. In the meantime, there is little users can do to protect themselves other than not connecting to unprotected Wi-Fi, and this may be a good time for them to consider switching to another brand of Android phone.

E3 Game Expo 2015

E3, held each year in Los Angeles, is the games industry’s biggest expo. Upcoming game releases are announced there, along with new games hardware and accessories.
There were simply too many announcements to summarise here, but the remake of the first-person shooter Doom, although stunning in its detail, seemed gratuitously graphic and violent. Another anticipated release was the action role-playing open-world game Fallout 4. Set in post-nuclear-apocalypse Boston, it lets players adopt a male or female role, enter a fallout shelter and, after 200 years have passed, emerge to explore the world above.

What will be interesting about this game is the addition of a Pip-Boy, a wrist-mounted computer accessory that holds a mobile phone and straps to the player’s wrist, allowing them to interact with the game through that device.

Other top upcoming games include Star Wars: Battlefront, Batman: Arkham Knight, Final Fantasy XV and Assassin’s Creed Syndicate.

On the console side, Microsoft announced that the Xbox One will support streaming of games to a Windows 10 PC, where it will also be able to support Facebook’s Oculus Rift virtual reality headset. Microsoft also showed off its own augmented reality headset, HoloLens, being used with Minecraft. The video highlights some of the amazing potential of this technology that will be available in the not-too-distant future.

This article was originally published at The Conversation.

The byte may destroy the book but the novel isn’t over yet

6:01AM | Friday, 5 June

In This Will Destroy That, also known as Book V, Chapter 2 of Notre Dame de Paris, Victor Hugo presents his famous argument that it was the invention of the printing press that destroyed the edifice of the gothic cathedral. Stories, hopes and dreams had once been inscribed in stone and statuary, wrote Hugo. But with the arrival of new printing technologies, literature replaced architecture.

Today, “this” may well be destroying “that” again, as the Galaxy of the Internet replaces the Gutenberg Universe. If a book is becoming something that can be downloaded from the app store, texted to your mobile phone, read in 140-character instalments on Twitter, or, indeed, watched on YouTube, what will that do to literature – and particularly Hugo’s favourite literary form, the novel?

Debates about the future of the book are invariably informed by conversations about the death of the novel. But as far as the digital novel is concerned, it often seems we’re in – dare I say it – the analogue phase. The publishing industry mostly focuses on digital technologies as a means for content delivery – that is, on wifi as a replacement for print, ink, and trucks. In terms of fictional works specifically created for a digital environment, publishers are mostly interested in digital shorts or eBook singles. At 10,000 words, these are longer than a short story and shorter than a printed novel, which, in every other respect, they continue to resemble.

Digital editions of classic novels are also common. Some, such as the Random House edition of Anthony Burgess’s A Clockwork Orange (1962), available from the App Store, are innovatively designed, bringing the novel into dialogue with an encyclopaedic array of archival materials, including Burgess’s annotated manuscript, old book covers, videos and photographs. Also in this category is Faber’s digital edition of John Buchan’s 39 Steps (2013), in which the text unfolds within a digital landscape that you can actually explore, albeit to a limited degree, by opening a newspaper or reading a letter.

But there is a strong sense in which novels of this sort, transplanted into what are essentially gaming-style environments the novel form was not designed for, can be experienced as deeply frustrating. This is because the novel, and novel reading, is supported by a particular kind of consciousness that Marshall McLuhan memorably called the “Gutenberg mind”. Novels are linear and sequential; post-print culture is interactive and multidimensional. Novels draw the mind into deeply imagined worlds; digital culture draws the mind outward, assembling its stories in the interstices of a globally networked culture.

For the novel to become digital, writers and publishers need to think about digital media as something more than just an alternative publishing vehicle for the same old thing. The fact of being digital must eventually change the shape of the novel, and transform the language.

Far from destroying literature, or the novel genre, digital experimentation can be understood as perfectly in keeping with the history of the novel form. There have been novels in letters, novels in pictures, novels in poetry, and novels which, like Robinson Crusoe (1719), so successfully claimed to be factual accounts of actual events that they were reported in the contemporary papers as news. It is in the nature of the novel to constantly outrun the attempt to pin it down. So too, technology has always transformed the novel.
Take Dickens, for example, whose books were shaped by the logic of the industrial printing press and the monthly and weekly serial – comprising a long series of episodes strung together with a cliffhanger to mark the end of each instalment.

So what does digital media do differently? Most obviously, digital technology is multimodal. It combines text, pictures, movement and sound. But this does not pose much of a conceptual challenge for writers, thanks, perhaps, to the extensive groundwork already laid by the graphic novel. Rather, the biggest challenge digital technology poses to the novel is that digital media isn’t linear – it is multidimensional, allowing stories to expand, often wildly and unpredictably, in nonlinear patterns. Novelistic narratives, as we currently know them, are sequential, largely predicated on the presence of a single unifying consciousness, and designed to be read across time in the order designated by the author.

Last year, this presented a serious problem for many would-be readers of David Mitchell’s short story The Right Sort, a fictional work composed of 280 tweets, sent out in groups of 20, twice a day, for seven days. Frustrated readers complained that they couldn’t catch the tweets – that half the narrative had gone missing in the digital ether. It was far more comfortable to read the printed version via the link in the Guardian, where readers found a beautifully turned, albeit somewhat conventional, short story whose major concession to its digital environment was that it was composed of short scenic snatches 140 characters long.

Another difference between digital and print technologies is that the printed novel encourages private reading, whereas digital readers tend to share their experiences in networked, highly social environments.

Today, even authors of traditional novels are expected to maintain an online presence. In order to publish a book, you need a hashtag, a Facebook page, a blog tour, a book trailer on Vimeo or YouTube and a Twitter account. Much was made of the potential for this type of media to supplement a novelistic text when Richard House’s “digitally augmented” thriller The Kills (2013) was long-listed for the Man Booker Prize. But the potential exists for this kind of interaction to go beyond merely “augmenting” a novel, to integrating with, and actually expanding, it. In the not-too-distant future, digital novels will find themselves expanding horizontally across platforms, and readers may well find themselves interacting with, transforming, and even contributing to the content. This may well be the moment when the walls of literature (as we know it) come tumbling down. Yet the scathing critic might do well to remember that Dickens was revolutionary in his day, not only for charting the course of serialisation, thereby making literature popular and accessible, but also for making ordinary people the subject matter of his writing.

One novel that gives you a glimpse of what the digital novel might turn out to be is The Silent History (2014), created by Eli Horowitz – best known as an editor at the New York-based literary journal McSweeney’s – in collaboration with Matthew Derby and Kevin Moffett. The story is set in the second quarter of the 21st century, when children begin to be born who fail to develop the cognitive functions necessary to acquire, understand or use language.
The prose is dazzling, the characters and their predicaments are moving and, just as importantly, the digital aspects of the book are set deep in its design. They are present not only in its themes – though these aptly deal with the problem of communication – but also in its collaborative structure and interactive details.

The Silent History is available in print and ink, but it was originally developed as an app. The written sections of the text – called “Testimonies” – which contain the main trajectory of the story, were uploaded sequentially, along with a variety of mixed-media elements, including video and photographs. One of the striking aspects of the work is its capacity to grow through user-generated content. The writers gradually expanded the “Testimonies” through the inclusion of “Field Reports” – short narratives submitted by readers and other writers. These can only be unlocked using the map on your mobile or tablet device at a specific location – a bit like a GPS-activated and endlessly proliferating instalment of Dickens.

The digital novel wasn’t on show at the Sydney Writers’ Festival last week, but there are clear signs that this counter-cultural curiosity is edging its way into the literary mainstream. But the wary can rest assured. Despite Hugo’s protestations, architecture wasn’t destroyed by the printing press. It was only transformed. So too, the novel isn’t over yet.

This article originally appeared on The Conversation.

How to motivate your staff: Top tips from Daniel Pink

6:14PM | Monday, 1 June

The businesses that succeed in the future will be the ones that fundamentally shift the way they think about motivating their employees, according to best-selling author and speaker Daniel Pink.

Pink is renowned for his research and writing about business and human behaviour. He is the author of numerous books, including To Sell is Human: The Surprising Truth About Moving Others, Drive: The Surprising Truth About What Motivates Us, and Free Agent Nation: The Future of Working for Yourself.

Pink has been named one of the top 15 business thinkers in the world, and he was on hand to give one of the keynote addresses at the recent Future of Work conference held by the University of Melbourne’s Centre for Workplace Leadership.

Via a video link-up from Canada, Pink walked the conference attendees through his formula for re-thinking motivation and how to start making changes to their employees’ motivation levels. As Pink describes it, the future of work is about building businesses that “foster both humanity and productivity”.

The science of motivation

“Motivation is a dangerous topic in many ways,” Pink told the audience. “A lot of you have probably been at conferences where somebody comes up to talk about motivation and what you hear is a lot of gobbledegook.”

“You hear ‘believe in yourself’, ‘you can do it’. They might tell you about some great athletic triumph they have had – how they scored the winning goal in a soccer game or the winning point in an Australian rules football match, or climbed a mountain – to try to pump you up and inspire you.”

But Pink believes there is much more to learn by considering what science can tell us about what motivates people.

“This is central if we are talking about the future of work, because the future of work is not only technology,” Pink said. “It’s not only the configuration of organisations, it’s not only the talent supply chains and all those kinds of things, although they are important.”

“Ultimately what it gets to is who is the human being on that laptop, on that mobile phone, across from that customer? What is making him or her tick? If we get that wrong, we’re going to have a very impoverished future of work.”

For Pink, social science research from the past 50 years is a treasure trove of valuable information about what motivates people, starting with the different types of knowledge we all possess. “You can think of it this way: there is explicit knowledge and implicit knowledge,” he said.

Explicit knowledge is something we know but also something we know that we know. Implicit knowledge captures those situations when “we know stuff but we don’t necessarily know that we know it”. As Pink sees it, “much of our knowledge about motivation is implicit knowledge”.

“We never say the rules out loud, but most people’s knowledge of motivation goes like this, especially when it comes to workplaces: when you reward behaviour, you typically get more of it. And when you punish behaviour, you typically get less of it.”

“We never say those things explicitly or bring to the surface and announce what we know, but we do know them. And more than that, we act on that knowledge every single day.”

If/then rewards work … but only sometimes

These implicit ideas have given rise to the prevalence of what Pink calls “if/then rewards” in businesses, also called controlling contingent rewards: the idea that if you do x, it will lead to y.
But Pink said the other part of the equation is that this reward system only works under certain conditions.

“When you reward behaviour, you do get more of it, sometimes. When you punish behaviour, you do get less of it, sometimes. But not all the time. And here’s the kicker: it’s not nearly as much as we think.”

Pink pointed to one piece of academic research that illustrates why this carrot-and-stick system does not always result in increased motivation. The study, conducted by four US economists, asked individuals in three groups to complete a series of physical and cognitive tasks with different degrees of complexity. The top performers in the first group were told they would get a small financial reward; the top performers in the second group were told they would get 10 times what the first group did; and those who performed well in the third group were told they would get 10 times what group two’s top performers did.

Pink said most people would assume group three would outperform group two, and group two would outperform group one, because they stood to gain the most financially – and this is what happened when the tasks involved only mechanical skills.

“But here’s where it gets more interesting … once the task called for even rudimentary cognitive skill, a larger reward led to poorer performance,” Pink said.

While this “seems completely upside down”, Pink said it explains why “if/then rewards” are great for short-term tasks or those that follow an algorithm or recipe. “They are extraordinarily effective … [because] we love rewards,” he said. “Rewards get our attention, but they get our attention in a particular way.”

However, Pink said the same research tells us that “if/then rewards” are far less effective for complex and long-term tasks.

“Let’s say you’re solving a murky problem – finding a new line of business, solving a problem or doing creative or conceptual work – do you want to tackle a creative problem [in the same way]? No, you want to tackle a creative problem with a much more expansive view,” he said.

For Pink, this type of reward system, which he says has become a “mainstay” in modern business, is at odds with the nature of work in the 21st century, which is increasingly complex and non-rudimentary. Yet many business leaders and managers continue to apply the same system to “everything instead of the area where we know that they work”.

“What we need to do is use ‘if/then rewards’ when they are effective, but follow a different approach where they are not effective,” he said.

Re-thinking motivation starts with money

Shifting the way we think about motivation away from the assumptions of “if/then rewards” – or “re-calibrating”, as Pink puts it – starts with assumptions about money.

“Money is a motivator,” Pink said. “Money matters a lot. Money matters a heck of a lot. It just matters in a slightly more nuanced way than we think.”

Pink said financial rewards are one area in which businesses often “go over the rails”, as the assumption is that money is a fool-proof motivator. But in fact, fairness can matter much more. “Human beings in general, but human beings in workplaces in particular, are exquisitely in tune to the notion of fairness,” Pink said.

“So how do people evaluate fairness?”
“Well, if two people do the same kind of work, have the same level of experience, have comparable levels of contribution, and one is getting paid a lot more than the other, you’ve demotivated the other person.”

Like “if/then rewards”, Pink said money works well as a motivator for mechanical tasks – “you pay people per envelope you want stuffed, you’ll get more envelopes stuffed” – but it loses its effectiveness for non-mechanical or more complex tasks.

“What do you want people thinking about? You want them thinking about the work. And one way to get them to think about the work is to not have them think about the money,” Pink said. “The best use of money as a motivator is to pay people fairly, pay people well; indeed, pay people enough to take the issue of money off the table.”

The three great motivators: purpose, mastery and autonomy

Once money is “off the table”, Pink said there are “three key motivators for enduring performance on these more complex tasks”. They are purpose, mastery and autonomy.

Purpose

According to Pink’s formula, there are two types of purpose. Purpose with a capital P is about “making a difference” in the world, while purpose with a lower-case p is about “making a contribution”. Both types of purpose can be motivating.

“People want to either make a difference, or a contribution, or both,” Pink said. “And so when you’re looking to motivate yourself or looking to motivate other people, you need to be able to answer this question on their behalf: am I making a difference, but also am I making a contribution?”

Pink presented two other pieces of academic research that show how this works in practice. In the first study, the motivation of call centre workers at a university increased when they were asked to read a letter from students who had benefited from the university’s scholarships. In the other study, the quality of food produced by cooks at a particular restaurant improved when the cooks could see the people eating the food they had prepared.

What does this mean in your workplace?

“Next week, have two fewer conversations about how and two more about why,” Pink told the business owners and managers in the room. “When you start saying to your employee ‘here’s how’, stop yourself and turn it into a why conversation. Say ‘here’s why’ and you’ll see an uptick in performance.”

“The other idea I have for you is basically tear down the wall,” Pink said. “See what you can do to tear down the wall between individual people inside a firm and the ultimate customer … [if] you can show people the customer, the end user, that itself is a very inexpensive way to boost motivation.”

Mastery

“Getting better at stuff”, or mastery, is also inherently motivating, said Pink.

“We like to make progress. But there is a problem. The problem inside an organisation is that progress depends on feedback. The only way you know you are making progress on anything is if you are getting information on how you are doing. But I want to make the case to you that the workplace, especially in large organisations, is a feedback desert.”

Pink said many businesses still rely on annual performance reviews as the only formal feedback for employees, but he suggests more frequent and more informal catch-ups with staff to improve a business’s “feedback metabolism”. He even gave the example of a practice pioneered by Australian tech success story Atlassian, called “weekly one-on-ones with a twist”.
Managers catch up with their employees each week, but one meeting a month is dedicated to a different topic, from what the employee loves about their job – and what they loathe – to what kinds of barriers they face in their position.

“What we have to get away from is feedback systems that are sluggish and formal, to feedback systems that are like everything else: actually pretty swift and informal,” he said.

Autonomy

The last part of the equation is autonomy.

“To understand autonomy, you have to understand this one word: management,” Pink said. “We take this word too seriously. We think management has always been here, that it was given to us by God, or came from nature. But here’s another way to think about management: management as a technology. It’s a technology for organising people into productive capacities.”

Pink said management is a “great technology for getting compliance”, but businesses of the future don’t need more compliance; they will need engagement.

“We’re trying to manage people into engagement, but people don’t engage when they are being managed,” he said. “The technology for engagement is self-direction. That’s how people engage.”

Pink finished his keynote by sharing examples of businesses and organisations that are taking this idea of autonomy and seeing impressive results. Atlassian was again mentioned for its “ship it days”, where employees are given 24 hours to work on their own projects, which they then present to the team; as was the South Australian government, which allows employees to nominate a “90-day project” to work on for three months. The result, according to Pink, is a whole host of “local government innovations” that would not have happened otherwise.

“If you give people these little windows of autonomy, you can do some really great things,” he said.

This article originally appeared on SmartCompany.

Adblock Plus is destroying the online ad market

6:53AM | Monday, 1 June

Advertising represents 90% of Google’s $US66 billion in annual revenue. What makes this figure especially astounding is the suggestion that the vast majority of web advertising may never actually be seen by end users – and when it is, it is largely ineffective.

Google, Facebook and other online advertising companies are less worried about the effectiveness of ads than about simply showing the ads to users so they can bill their customers. However, this business has been made harder by the rise in the use of ad blocking software over the past 10 years.

Ad blocking

Ad blocking software applies a list of filters to every web page that is loaded, and either hides each ad from showing or blocks the request to the site that hosts it. Not all ads are blocked, however. Depending on the software, some sites and their ads can be “whitelisted” and allowed to show. Adblock Plus, for example, applies the principle that only “bad” advertising should be blocked. Bad ads are those that fail a set of criteria holding that ads should not be annoying or intrusive, should not distort the page they are embedded in, and should be appropriate for the context. Companies that produce “good” ads can then pay Adblock Plus to have their ads whitelisted.

uBlock, another popular ad blocking plugin for browsers, simply blocks anything that looks like advertising.

Ad blocking software is now used by 25% of users in the US, rising to 40% of users in countries like Germany.
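Stripped of the details, the core of a request-level blocker is pattern matching with a whitelist check, along the lines of the Python sketch below. This is illustrative only: the patterns and the whitelisted host are invented for the example, and real blockers such as Adblock Plus and uBlock compile thousands of rules from lists like EasyList, with a far richer syntax plus cosmetic (CSS-based) hiding of in-page elements.

```python
import re

# Illustrative patterns only; real filter lists are much larger
# and use their own dedicated rule syntax.
BLOCK_PATTERNS = [
    re.compile(r"ads?\."),      # e.g. ads.example.com
    re.compile(r"/banners?/"),  # e.g. example.com/banner/728x90.js
    re.compile(r"tracking"),    # crude catch-all for trackers
]

# Hypothetical whitelisted host, standing in for Adblock Plus's
# per-rule "acceptable ads" exemptions.
WHITELISTED_HOSTS = {"good-ads.example.com"}

def should_block(host: str, url: str) -> bool:
    """Return True if the request should be dropped before it is sent."""
    if host in WHITELISTED_HOSTS:
        return False
    return any(p.search(url) for p in BLOCK_PATTERNS)

# A page requesting an ad script versus its own stylesheet:
print(should_block("ads.example.com", "https://ads.example.com/banner/1.js"))  # True
print(should_block("example.com", "https://example.com/style.css"))            # False
```

uBlock’s behaviour corresponds to running patterns like these against everything, while the whitelist check is what lets “good” ads through.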
Native advertising

Partly in response to the declining effectiveness of traditional online ads, advertisers have turned increasingly to native advertising. Native ads have been around for some time. They span from the placement of products in a TV show or film through to sponsored articles or posts on social media that are written to look like part of the site’s normal content. Sponsored tweets on Twitter and posts on Facebook have become more prevalent and have been shown to be far more effective than banner or search-based ads – though those on social media, at least, are still considered not terribly effective.

Here again, users are turning to software to remove native ads. On Facebook, products like Facebook Purity give the user a high degree of flexibility in removing advertising from Facebook on the web.

Advertisers in Germany decided to tackle the problem of ad blocking head on by taking Eyeo, the company behind Adblock Plus, to court. This avenue was shut down, at least in Germany, when the Munich court ruled that ad blocking was indeed legal.

The threat to mobile

If matters on the web were not bad enough, advertisers face an even bigger problem on mobile. Several telecommunications companies are in the process of deploying ad blocking software to their mobile networks. One technology they are using, from an Israeli startup called Shine, works at the level of the mobile network to block ads.

From the mobile carriers’ perspective, advertising has been estimated to use anything between 10% and 50% of a mobile phone user’s bandwidth, which the end user is ultimately paying for. Ironically, however, the telecommunications companies are possibly seeking part of that income by forcing companies like Google to pay them a share for ads transmitted on their networks. Even if this doesn’t succeed, the carriers can switch on ad blocking to enhance the service for their users, or charge them for an ad-free environment.

An ad-free world?

If ads are successfully blocked in the mobile world, which is seen as becoming the dominant platform for advertising in the future, Google and others contend that they will need to start charging for software that has been provided for free but funded through advertising.

From an end user’s perspective, moving from a free service to a paid one would actually be a good thing, because it would make the relationship between the user and companies like Google more explicit. It is hard to argue that it is acceptable to secretly track users and capture their personal details when the user is paying for a service. Whether this model would be as financially lucrative for Google as selling ads remains to be seen.

Advertising, even invasive and disruptive advertising, is not going to disappear overnight. In-app advertising on Android and Apple devices is going to be harder to block, even at the mobile carrier level, and advertisers will move increasingly to native formats, which are also harder to block. In an online landscape where competition is global and only a few gatekeepers control the platforms for advertising, it is hard to see how to arrive at a solution that will keep all parties in the equation happy.

David Glance is Director of the UWA Centre for Software Practice at the University of Western Australia. This article was originally published on The Conversation. Read the original article.

Mobile ad spending lags behind usage: Six key digital trends from KPCB

5:03PM | Sunday, 31 May

Mobile phone use now accounts for nearly a quarter of the time consumers spend with media, yet mobile accounts for just 8% of ad spending, according to a new report by US venture capital firm Kleiner Perkins Caufield & Byers.

The lag between mobile phone use and mobile ad spending was one of the key statistics in KPCB’s 2015 Internet Trends report, which looks at how mobile phones and the internet are fundamentally reshaping how consumers engage with the media. Here are six major trends identified in the report:

1. Internet and smartphone growth is strong, but slowing

The KPCB report shows that while use of the internet and smartphones continues to grow, the growth rate is slowing compared to previous years.

Currently, around 2.8 billion people use the internet globally, but the rate at which new people are signing up is slowing: growth was just 8% in 2014, compared to 10% in 2013 and 11% in 2012. Aside from India, where the number of connected users surged 33%, internet growth in most major economies is slowing. It was just 7% in China during 2014, 4% in Brazil, 2% in the US and flat in Japan.

Similarly, the number of smartphones globally reached 2.1 billion in 2014, but again the growth rate is slowing: 23% in 2014, compared to 27% in 2013 and a massive 65% in 2012. While growth has already slowed in the US (9%) and Japan (5%), it remains much faster in China (21%), India (55%) and Brazil (28%).

2. Mobile phones are now used by 5.2 billion people globally

One of the standout trends in the report is the breathtaking growth in mobile phone use over the past 20 years. According to the report, just 1% of the world’s population – around 80 million people – owned a mobile phone in 1995. Since then, the number of mobile phone users has surged to 5.2 billion. Just under three-quarters (73%) of the people alive anywhere on Earth now have a mobile phone of some description.

However, of the world’s mobile phone users, just 40% have upgraded to a smartphone, while a remarkable 60% still use feature phones. This suggests there is a lot more growth to come for the smartphone market globally, especially in emerging markets.

3. More people have mobile phones than internet access

Compared to the 73% penetration rate for mobile phones, only 39% of the world’s population (around 2.8 billion people) have some form of internet access. The number of people online is fairly evenly spread between Asia excluding China (28%), China (23%), Europe (19%), the US (10%) and the rest of the world (21%).

This represents a huge change from just 20 years ago. Back in 1995, there were only around 35 million internet users globally, with 61% in the US, 22% in Europe, 12% in Asia (excluding China) and just 5% in China and the rest of the world.

4. We’re spending more time online than ever before

The figures show we’re spending more time than ever before on the internet – and the main device we now use is the mobile phone.

In 2008, the average adult in the US spent just 2.7 hours a day online. Most of that time – 2.2 hours per day, or around 80% of all internet use – was from a desktop PC or a laptop. Just 12% of internet time on average (0.3 hours) was spent on a mobile phone, and 9% (0.2 hours) on other connected devices.

By contrast, in 2015, the amount of time the average American spends on the internet has nearly doubled, to 5.3 hours per day.
Most of that time – 51% of all time spent online – is now from a mobile phone. Interestingly, while the amount of time each day people spend accessing the internet from a desktop or laptop computer has increased slightly, to 2.4 hours, its share has fallen to just 42% of the total.

5. Ad spending hasn’t caught up with mobile internet use

While the growth in mobile phone use has been epic in recent years, advertising spending hasn’t yet caught up. The report compared the percentage of time Americans spend each day with each media form against the percentage of advertising money spent on each medium.

It found that the share of ad spending on radio, TV and the internet is roughly in line with the amount of time consumers spend with those media: radio’s time share and ad share both stand at 11%, TV’s ad share is 41% compared to a 37% time share, and the internet claims 24% of time and 23% of ad spending.

One big outlier in ad spending is mobile. While mobile now accounts for nearly a quarter of media time (24%), it gets just 8% of ad spending. The other big outlier is print. Today, print accounts for a measly 4% of media consumption time, yet claims a remarkable 18% of media ad spending.

6. The rise of mobile means more time spent on vertical screens

With a few notable exceptions, such as the BlackBerry Passport, most smartphone screens are held vertically while in use. This sets them apart from computer monitors and TVs, which are typically horizontal in orientation.

The report notes that as recently as 2010, most of the time consumers spent staring at a screen was spent with a horizontal screen. However, the explosion of mobile phone use means that consumers in the US now spend 29% of their screen time with a vertical screen, with the share of horizontal screens falling to 71%.

Why half a billion people downloaded Candy Crush

5:48AM | Tuesday, 19 May

On trains. In parks. At traffic lights. So many of us are buried deep in our phones, gulping down pixels of information and entertainment like a thirsty desert pilgrim gulps down water.

And it seems many of us can’t get enough of one particular aspect of smartphones: mobile gaming. In fact, over half a billion people have downloaded one game alone*. In order to convince us to spend so much of our time playing, game design relies heavily on behavioural psychology, and it seems the industry is doing a lot right.

In the UK, 46% of internet users now play games on a mobile phone, up from 39% in 2012. In the US, the number of smartphone gamers is expected to reach 70% of smartphone users in 2015 (a whopping 116 million people), with players on average spending $4.58 a month on games. The industry is projected to reach revenues of $US30.3 billion in 2015, surpassing traditional console gaming. It is huge, growing, and a very interesting case study in influencing behaviour.

So what are a few of the techniques game designers use to acquire and retain users?

Effort vs. Reward equation

Before we dive in, remember that behaviour boils down to what I call the “Effort vs. Reward equation”. When Effort exceeds Reward, behaviour doesn’t happen. When Reward exceeds Effort, it does. In other words, is all the stuff I have to outlay in this decision (time, money, status, effort) less than the payoff I expect?

So what are a few of the techniques game designers are using to make R > E?

Free and freely available

Getting people to download your game is make or break for game designers, so to reduce “Effort” most games are free or have free versions – no money on the line means no risk. Having the games freely available in the iTunes and Google Play stores is also vitally important, because it means users don’t have to go out of their way to find them. Nirvana for a game designer is, of course, having the game pre-loaded on the phone so there’s not even a download step required. This reminds me of the old Coca-Cola vision of being “in arm’s reach of desire”. Be where people are already.

Positive and negative tension

I often talk with my clients about the use of positive and negative tension when creating pitches, presentations, websites and campaigns. Negative tension is the anxiety people feel about doing business with you. Positive tension is the anxiety people feel if they don’t do business with you. Let’s look at a couple of examples from the world of mobile gaming.

Image A on the left uses negative tension (Loss Aversion) in a couple of ways. Most obviously, it tells you that you didn’t get to meet Cinderella. In other words, you’ve failed at what you set out to accomplish. Accompanying this statement is a sad mouse face that looks you right in the eye to dial up the feeling of disappointment. Not only are you sad, but this character is too. You’ve let others down. Ouch.

The good news? The positive tension? The dream doesn’t have to be over! You can still meet Cinderella, and it’s as simple as clicking “Continue”.

Image B on the right trades on similar techniques, albeit in a slightly different sequence. This time, instead of starting with negative tension – playing on what you missed out on (Cinderella) – it uses positive tension as the lead statement, focusing on the small step to success (“You only need 3 ingredients”). The negative tension whammy comes a little later in this example, waiting to hit us with the super combo of a “Give Up” button with a broken-heart icon.
Path of least resistance

We are inherently lazy creatures, following the path of least resistance most of the time. When in doubt, our tendency is to opt for the default setting, the easiest button to press.

Look again at the screenshots. In Image A, note how the “Continue” button is large and in the very place your eye and finger would naturally travel. The option not to continue? Well, that’s the “X” icon you have to click in the top right of the screen, a long way from where your attention was. (Lots of pop-up and pop-over ads do this too.)

Image B does things a little differently. First, it makes the option not to proceed a little easier to find, instead relying on language to make it psychologically harder to click (after all, no one likes to ‘give up’); and second, it makes sure the button to proceed is bigger than the alternative.

Key take-aways to apply to your business

I could go on for hours about game design, but here are some central messages for your business:

If you are developing an app, you need to spend as much time on the behavioural strategy to get the app onto people’s phones as you do on the app’s functionality.

Just because apps are something the cool kids are doing doesn’t mean your business needs one. They can be expensive to develop, need constant attention and have a short lifespan (wow, sounds like dating). What will convince your target market to download your app? Is R > E?

Think about how you are using positive and negative tension. Too much negative tension without any positive will just bum your customer out. Insufficient negative tension will mean they are too comfortable leaving things as they are.

P.S. You can read more about the Effort vs Reward equation here and here.

*And that game is Candy Crush.

Bri Williams runs People Patterns, a consultancy specialising in the application of behavioural economics to everyday business issues.

Huffington Post, TechCrunch and Engadget sold to US telco for $5.5 billion

5:33AM | Wednesday, 13 May

AOL, the parent company of online publications including Huffington Post, TechCrunch and Engadget, is to be sold to US telco giant Verizon for $US4.4 billion ($A5.5 billion).

The deal will see AOL purchased for $US50 per share, funded through cash on hand and commercial paper. In a statement, Verizon cites the creation of a mobile-first online advertising platform connected to its IoT (Internet of Things) products as the key motivation behind the deal.

Upon completion of the deal, which is subject to regulatory approval, AOL chief executive Tim Armstrong will continue to oversee operations at the digital media and advertising company, which will become a wholly owned subsidiary of Verizon. Armstrong raised eyebrows in 2013 by firing an employee in front of 1000 colleagues.

The takeover is far from the first mega-deal involving AOL, which merged with Time Warner in January 2001 in a deal that saw AOL shareholders own 55% of the combined company. However, following declines in AOL’s dial-up internet business over the years that followed, the merger was unwound in May 2009.

Following the spin-off, AOL – which originally launched in 1983 as a Commodore 64 BBS (bulletin board service) – transformed itself into an online media company by purchasing TechCrunch in 2010 and Huffington Post for $US315 million in 2011.

Meanwhile, in September 2013, Verizon purchased Vodafone’s stake in mobile phone joint venture Verizon Wireless for $US130 billion, a mega-deal that was the third-largest in history at the time it was announced.

This article was originally published at SmartCompany.

How children view privacy differently from adults

4:40PM | Thursday, 16 April

Have you seen the how-to video of a teenage girl styling her hair that went disastrously wrong? She was obviously very disturbed by what happened, yet still uploaded the footage to YouTube. Do you think a 45- or 50-year-old would upload an equivalent video of themselves?

The majority of young people now share lots of things online that many adults question and feel uncomfortable about: their likes, dislikes, personal views, who they’re in a relationship with, where they are, and images of themselves and others doing things they should or maybe shouldn’t be doing.

In fact, a study undertaken in the US by Pew Research found that 91% of 12-to-17-year-olds posted selfies online and 24% posted videos of themselves. As well, 91% were happy posting their real name, 82% their birthday, 71% where they live and the school they attend, 53% their email address and 20% their mobile phone number.

Overstepping

Children’s fondness for online sharing is a global phenomenon, and in response governments internationally have initiated awareness campaigns that aim to ensure children are more private online.

In the UK, the National Society for the Prevention of Cruelty to Children recently launched a Share Aware campaign. This includes a recent TV advertisement, called I saw your willy, which depicts the ill-fated consequences for a young boy who, as a joke, texts a photo of his penis to his friend. The ad emphasises to children the need to keep personal information about themselves offline and private.

Similarly, the Australian Federal Police have launched Cyber safety and ThinkUKnow presentations for school students, which highlight the social problems that can arise when you’re having fun online.

Adults often interpret children’s constant online sharing to mean that they don’t care about privacy and/or don’t understand the potential longer-term issues. There is some truth to this perspective. But simply labelling children as either disobedient or naïve is too simplistic. There is an important need to understand why children are overstepping adult-defined marks of privacy online.

Shifting attitudes

In the words of Facebook, our relationship status with privacy can be summed up as: it’s complicated.

Part of the complexity comes down to how privacy is defined. Many adults understand privacy to mean being selective about what one reveals about oneself, so as not to reveal too much personal information. We often assume that children will adopt the same conceptualisation, but should we?

Privacy is a fluid notion. Think of Victorian times and the imperative for women to keep their ankles hidden. Part of the reason its definition is shaped and reshaped is the changing social environment in which we live. This idea is useful for thinking about why children divulge so much information online.

Children are growing up in public (not private) times, in which people freely and constantly reveal themselves on their screens. This is not solely associated with physical nudity and the stream of semi-clad women who constantly inhabit advertisements, music videos and the like. An environment that idolises nudity certainly contributes to children seeing such behaviour as the norm. Privacy, however, is not just about nudity and sex.

Given the exponential growth of reality shows and social media, children now have unprecedented access to the inner thoughts and personal actions of others. Children are growing up watching real people freely share their deeply personal ideas, experiences, opinions and actions.
The very purpose of these media is to encourage such sharing of information!

Children watch everyday people in the Big Brother house openly discuss their sexual experiences, develop friendships, go to the toilet, get ready after their morning shower and explain deep personal childhood issues.

Similarly, they watch Survivor and The Bachelor, where people reveal the darker side of their ambitions, world-views and ways of dealing with others. These revelations come under the guise of competition, but they carry subliminal messages about what we can and should share publicly.

Consistently watching others reveal themselves on screen feeds children’s understanding of what is private information and what isn’t. The impact is strengthened because children watch these revelations on a personal screen, such as a tablet or mobile, which can make it more of an intimate, one-to-one connection for the child.

Generation gap

Add to this the dynamic stage of life young people are at, one characterised by risk-taking behaviour, and the result is an understanding that sharing what many adults might consider private ideas is really just part of life.

In previous generations it was assumed that the average person wouldn’t want to give up privacy. But for this generation, giving up privacy for a social life, fame (or infamy for some), easy access to shopping, and studying or working from home is the norm.

Children’s penchant for online sharing is a much larger cultural transformation than it’s given credit for. The whole idea of what is private and what is public is being disrupted and reshaped by new screen-driven interests and activities.

We need to move away from simply judging and reprimanding children for their online sharing habits. There is always a need for safety and awareness campaigns, but it is also important to move beyond older and outmoded views of privacy so that we can actually understand young people’s privacy negotiations.

In this way we might have more of a chance to meaningfully support negotiations that are transparent, equitable and foster children’s well-being.

This article was originally published on The Conversation. Read the original article.

The hazards of presumptive computing

4:55AM | Wednesday, 1 April

Have you ever texted somebody saying how “ducking annoyed” you are at something? Or asked Siri on your iPhone to call your wife, but somehow managed to be connected to your mother-in-law?

If you have, you may have been a victim of a new challenge in computing: that fine line where we trust a computer to make predictions for us despite the fact that it sometimes gets them wrong.

For one hapless administrator with the Australian Immigration department, this level of trust has almost certainly led to major embarrassment (or worse), with the revelation that in November last year they accidentally sent the personal details of the G20 leaders to the organisers of the Asian Cup football tournament, thanks to an autofilled email address that went horribly wrong.

We trust the machines, but sometimes the machines let us down. So, what’s happening? Are the machines too dumb to get what we mean? Or are they just getting too smart for their own good?

The uncanny valley of computer prediction

It feels like we’re entering an uncanny valley of computer prediction. This is where computers seem almost human and make us start to trust them, but then suddenly make a mistake so galling that we grow uneasy at having trusted a machine so completely.

The problem is that it’s all just so convenient. My typing speed has increased markedly since I started to trust my iPhone to autocorrect the vague words I type into it and just went with the flow. And services like Google Now, which predict the information you want before you even ask for it, are even more useful.

But the trade-off is that sometimes it gets things wrong. Sometimes I find that I’ve inadvertently sent the wrong message to my wife, or had the phone make a ridiculous suggestion, like labelling my office as “home” (that went down well with the aforementioned wife!).

So, why is it so hard for a computer to be human?

Fool me once, computer …

The challenge of making a computer seem human has been with us for quite a while. Ever since Alan Turing helped design the machines that broke the Enigma code during the second world war, we’ve striven to make a computer that can think and act like a human.

So much so that we have even devised a test, called the Turing Test, to determine whether a computer can successfully fool somebody into thinking it is human.

In the paper that proposed the test, Turing suggested that we don’t need to make a computer that can genuinely think – whatever that means – but rather just build a computer simulation for which we can positively answer the question “can machines do what we (as thinking entities) can do?”, as cognitive scientist Stevan Harnad puts it.

Through a test he called the “imitation game”, a human judge engages in natural language conversations with a human and a machine using a text-only channel. If the judge cannot tell the machine from the human, the machine is said to have passed the test.

Since Turing’s original paper, many variations on the test have been proposed, adding perceptual capabilities like vision and audio, as well as extending the test with robotics.

But so far, no computer has definitively passed the original Turing Test. Every time we come close, they stumble into that uncanny valley, fall short in some way that makes us start to feel uneasy, and the whole house of cards collapses.

This is not surprising. We are trying to make a machine deal with all the complexity of human processing, and it’s bound to make mistakes.
A classic example of this is the tank parable told by Eliezer Yudkowsky (a toy reconstruction of the failure it describes appears at the end of this article).

Tanks, but no tanks

To demonstrate the problem of teaching a computer to be human, Yudkowsky describes a situation where US Army researchers train a computer to recognise whether or not a scene has a tank in it. To teach the computer this, the researchers show it many images, some with tanks in them and some without, and tell the computer whether or not each image contains a tank.

Through their testing, they determine that the computer has learnt to identify each scene correctly, so they hand the system to the Pentagon, which then says its people couldn’t get it to work.

After some head scratching, the researchers discover that the photos of tanks had been taken on cloudy days and the photos without tanks had been taken on sunny days. So rather than learning to see tanks, the system had learnt to spot cloudy or sunny days!

Such are the hazards of teaching a computer a skill when it doesn’t have sufficient context to understand what you want it to do.

Teaching a computer to know what we mean, not what we say

So, after my mobile phone helpfully informed me that my workplace was “home” and I adjusted the address accordingly, I noticed my wife was quite quiet on the way home. I looked over at her and asked what was up, and she said “nothing, I’m fine”, at which point I knew I was in trouble!

But of course, that’s not what she said. She said she was “fine”, and a computer, without context, would take her at her word. Context is everything, whether you’re dealing with tanks or, especially, with a grumpy spouse.

Sometimes context is easy, such as the feature Google implemented a couple of years ago that checks whether you’ve used the word “attached” in an email, then whether you’ve actually added an attachment, and warns you if you’ve done one but not the other.

But sometimes context is harder, like when you type “Ian” and let it autocomplete, but end up with the wrong “Ian”. After all, how is Gmail supposed to know which Ian you wanted without a host of other knowledge based on the content of your email and what you know about the person you’re emailing?

Nonetheless, computers are getting better at it. The iPhone autocomplete now adds “well” without an apostrophe until it detects, a few words later, that you meant “we’ll” with an apostrophe, at which point it changes it. So it might not be long before it can tell you that you’re emailing the wrong “Ian” too.

But for now we still need to be careful, because until computers can understand all the context of what we mean and what we do as humans – and there is no guarantee they ever will – we are still in that uncanny valley of presumptive computing.

This article was originally published at The Conversation.
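As promised above, the tank parable is easy to reproduce in miniature. The sketch below is a toy illustration, not the original study: it builds synthetic “images” in which the presence of a tank is confounded with brightness (tanks only on dark, cloudy days), trains an off-the-shelf logistic regression on them, and then shows the model failing on sunny-day tanks. The image sizes, numbers and library choice (scikit-learn) are all assumptions made for the example.

```python
# Toy reproduction of the "tank parable": the classifier learns the
# confounded feature (brightness) rather than the intended one (tank).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, tank, sunny):
    """8x8 grey 'photos': sunny scenes are bright, cloudy scenes are dark.
    A 'tank' is a small darker blob in the centre."""
    base = 0.8 if sunny else 0.2
    imgs = base + 0.05 * rng.standard_normal((n, 8, 8))
    if tank:
        imgs[:, 3:5, 3:5] -= 0.1  # faint tank blob
    return imgs.reshape(n, -1)

# The training set reproduces the flaw: tanks photographed only on cloudy days.
X_train = np.vstack([make_images(200, tank=True, sunny=False),
                     make_images(200, tank=False, sunny=True)])
y_train = np.array([1] * 200 + [0] * 200)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test on sunny-day tanks: the model calls them "no tank".
X_test = make_images(100, tank=True, sunny=True)
print("fraction of sunny tanks detected:", model.predict(X_test).mean())  # ~0.0
```

The model scores perfectly on held-out data drawn the same way as the training set, which is exactly why the researchers in the parable believed it worked; only data that breaks the cloud–tank correlation exposes what was really learnt.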

Paris v Kim: ASX-listed games company takes on Kim Kardashian scoring deal with Paris Hilton

3:48AM | Tuesday, 3 March

A Hong Kong-based, ASX-listed company is planning to take on the internet’s biggest name, Kim Kardashian, with a rival celebrity mobile phone game.

Pure-play mobile game developer Animoca Brands, which debuted on the Australian Securities Exchange in January, has signed a deal with celebrity heiress Paris Hilton to license her name and likeness for a mobile game. Hilton, the great-granddaughter of Conrad Hilton, the founder of Hilton Hotels, signed the deal for an undisclosed sum.

The announcement saw Animoca’s share price jump 36% on opening, sitting at 27 cents at the time of publication.

Chief executive Robby Yung told SmartCompany Animoca was very pleased with the result.

“Obviously people seem to be interested in what we’re doing,” says Yung.

The game follows the incredibly successful release of celebrity app Kim Kardashian: Hollywood, which has been downloaded more than 83 million times since its release last June, generating more than $US74.3 million ($A95.8 million) in revenue for its developer, Glu Mobile.

The game allows a player to play Kardashian’s “friend”, following her around Hollywood in order to achieve her level of fame. It’s free to download, but users pay real money for “koins” that allow them to make in-app purchases on items like different outfits or events.

Glu Mobile is now planning to capitalise on its success with the release of a new game featuring pop star Katy Perry.

But Animoca will try to poach some of the young female audience away from Glu with the Hilton app, with Yung deeming Hilton “one of the world’s most recognized names”.

“One of the ways to stand out on the app store is to align yourselves with brands, and amongst the many types of brands out there, celebrity brands have proven to have tremendous currency,” he says.

Although Yung says a specific game has not yet been determined, he says it is likely the game will use the same monetisation method as the Kardashian game, which is the “predominant business model in the industry”.

Yung says he expects many app makers to approach other celebrities for deals.

“I think the Kim Kardashian app from Glu really tapped into the zeitgeist,” he says.

“You’ll find if you went and looked at the top ten people on Twitter with the most followers, they will be approached for apps.”

He rejects the idea that the Paris Hilton game is a “copycat” version of the Kardashian game.

“We haven’t even launched the game yet so that is difficult for anyone to say,” he says.

Danny Gorog, founder of Australian app developer Outware Mobile, says a flurry of similar apps is often released after the success of one.

“Just look at Flappy Birds,” he says, referring to a game that came out in the wake of the success of popular game Angry Birds.

“Angry Birds really spawned its own genre and I’d say the Kim Kardashian game has now spawned its own genre, and this is the follow up.”

Gorog says games are by far the most popular genre on Apple’s app store, but there is a “steep curve” between the most popular games and those still bidding for success.

“The difference between number one and number 100 on the app store is huge,” says Gorog.

“It’s a very hit-driven business. Kim Kardashian may be number one on the charts now, but where will it be in a year’s time? Look at Angry Birds, it’s completely dropped off [the charts].”

Animoca, which was only established last year, also has other game titles including Doraemon, Ultraman and Garfield.
Yung says Doraemon, a Japanese cartoon cat, has proven to be one of the company’s biggest successes.

“Paris Hilton has a very big brand name in a variety of countries, but I wouldn’t put too much consideration into that compared to some of our other products,” he adds.

This story originally appeared on SmartCompany.

Five social media startups to watch in 2015

2:58PM | Monday, 23 February

Hardly a week goes by without a new social media platform launching, with one Melbourne founder suggesting social media startups need to carve out a niche if they are to survive in such a highly competitive environment.

With this in mind, StartupSmart rounded up five Australian social media startups we think you should keep an eye on in 2015.

1. Nabo

Sydney-based startup Nabo is a social networking platform for neighbours. The idea is to bring people in the same suburb together online so they can organise play dates for their children or even have their goldfish babysat while they’re on holiday. The startup has secured $2.25 million in funding from Seven West Media and Reinventure Group, and launched nationally in December last year.

2. Vent

Complaints have always thrived online. However, a Melbourne startup is looking to collate them in one place and allow people to be supported by others when they need to get something off their chest but don’t want to pollute their Facebook or Twitter feeds. Vent has grown into a community of more than 10,000 users and to date the startup has raised $100,000 in funding.

3. TalkLife

TalkLife is a mobile phone app and social network designed to host important conversations about mental health that people might not wish to have on a general platform or share with family and friends. The Adelaide startup’s founder, Jamie Druitt, has received support from London’s Bethnal Green Ventures and is collaborating with Microsoft Research and the Massachusetts Institute of Technology to analyse real-time user data and predict high-risk mental health episodes in young people.

4. Mothers Groupie

Social network Mothers Groupie aims to reduce the isolation felt by young mothers by helping them meet face-to-face and talk to each other online. The site allows people to create or join groups based on a location – such as Melbourne’s inner northern suburbs – as well as other factors such as “young mums” or “new mums”. The platform also features a directory of “helpers” such as babysitters, cleaners and sleep consultants. Mothers can also post their own job ads, and the startup is looking to expand into the US.

5. Mineler

Mineler is a professional social network for people who work in the mining industry. Based in Perth, the platform allows people to bid for work and contracts as well as connect with others in the industry. The startup has more than 60,000 members as far away as Colombia and Canada, and has raised $500,000 in seed funding.

Follow StartupSmart on Facebook, Twitter and LinkedIn.

What Australia’s economy will look like in 2050: report

2:48AM | Wednesday, 11 February

The Australian economy will drop out of the G20 by 2050, according to research published today by PwC.

And the slide will continue unless the nation’s leaders fundamentally change the way they think about investing in science, technology, engineering and mathematics (STEM) skills, says one of the country’s leading technology entrepreneurs, Matt Barrie.

PwC’s The World in 2050 report predicts the Australian economy will drop 10 places in the world rankings by the middle of the century, falling from its current rank of 19 to 29.

This would put the Australian economy behind growing economies such as Bangladesh, Pakistan and the Philippines, and far behind the economic powerhouses China, India and the US, which are predicted to stay at the top of the rankings over the next 35 years.

According to PwC, the end of the Australian mining boom, and a lack of investment in other parts of the economy, will cause the Australian economy to fall behind.

The PwC rankings are determined by comparing the purchasing power parity of each economy, and this year’s result shows a broad shift from developed economies to emerging economies.

While China is predicted to remain the largest economy by 2050, India is expected to overtake the US for second place, and Indonesia, Mexico and Nigeria could push the UK and France out of the top 10.

The Philippines, Vietnam and Malaysia are expected to shoot up the rankings, while Colombia and Poland will grow at a faster rate than the large economies of Brazil and Russia.

Read more: STEM is critical for Australia’s economy

PwC Australia economics partner Jeremy Thorpe told SmartCompany the research indicates the Australian economy will “revert to trend” and we “won’t see the mining boom in the same way”.

While Thorpe says PwC is not trying to predict exactly what the Australian economy will look like, the takeaway from the research is that “we know the economy is going to be different and STEM will be important wherever it goes”.

“The Australian economy is not going to be as large in relative terms and so our companies are not going to be competing on scale,” Thorpe says.

“They will be competing at the smarter end.”

Thorpe says this represents an opportunity for “smaller, nimble companies”, especially those built on disruptive digital technology, and that is why it is essential to make long-term investments in STEM capabilities.

“Many of these things don’t pay off immediately,” says Thorpe.

“You can’t cut the ribbon in the same way you can for a new bridge; you have to look beyond the political cycle. But as the events of the past week have shown, it can be hard to divert attention from the here and now.”

Freelancer founder Matt Barrie agrees with Thorpe’s analysis, telling SmartCompany he fears the Australian economy will follow the same path as resource-rich Argentina, which saw a dramatic decline in its wealth because of government policies that did not alter the composition of the economy.

Barrie has spoken out regularly about the need for investment in STEM skills in Australia, including to Communications Minister Malcolm Turnbull last year.

“We have actually gone backwards in our thinking about the tech industry or science,” says Barrie, who lays the blame with “successive Australian governments”.

“There have been rampant cuts.
I don’t think it would be possible to do more damage.”

Barrie points to cuts to funding for science research, declining university enrolments in STEM subjects and courses, the “dumbing down” of the curriculum in primary and secondary schools, and the “screwing up” of remuneration schemes for technology companies as just some of what he believes are damaging policies.

“It just goes on and on and on,” he says. “We’re at risk of becoming a shipwreck … It’s death by a thousand cuts.”

If given the chance, Barrie says the first thing he would change is the K-12 curriculum taught in Australian schools.

“Every little kid wants to design the next Facebook … the next mobile phone app, but they don’t know how,” he says.

“We need to help them connect the dots.”

He would also encourage more people to work in STEM fields and appoint a national chief technology officer who would be responsible for setting longer-term goals.

But as long as the topic remains off the table in Canberra, Barrie says he “is at a loss”.

“There is fundamentally something wrong in the way our country is governed,” he says.

This article originally appeared at SmartCompany.

Follow StartupSmart on Facebook, Twitter, and LinkedIn.

With HoloLens, the future of reality is augmented

2:05AM | Friday, 6 February

Prepare to open your wallets, ladies and gentlemen: Microsoft has announced an augmented reality (AR) headset called HoloLens.

Although it was announced only a fortnight ago, tech media are already dreaming feverishly of the potential applications for such a device. Meanwhile, Microsoft’s own press images seem to promise a benign utopia replete with living room Minecraft games and attractive, tech-savvy white people.

It might look rather like an excitingly chunky pair of wrap-around sunglasses, but Microsoft promises that it is entirely self-contained, with speakers, lenses and CPU housed within the chassis.

Being substantially smaller than the Oculus Rift, and intended to be less invasive than Google’s much-maligned Glass, HoloLens is perhaps the most credible attempt yet to introduce augmented reality into our homes.

What is augmented reality anyway?

Even though HoloLens is new, the idea of augmented reality is certainly not. First proposed by L. Frank Baum (author of The Wizard of Oz) in his 1901 novel The Master Key, AR has a long and illustrious history, at least in theory. But what is it, exactly?

Augmented reality is a form of “mediated perception”: an AR device overlays a virtual world on the real one. It does this by taking a live video feed of the external world and supplementing it with computer-generated sensory input (a bare-bones sketch of this loop appears below). In this sense it is unlike virtual reality, which entirely replaces the external world with a virtual one. Instead, AR embellishes the real world to make it more fun, clear or informative.

Already there is a huge number of AR apps, most of them built for mobile devices. One such is Layar, which synchronises camera input with data from Google Maps to let you see the world with the digital marginalia built in.

Meanwhile, games like Ingress enable smartphone users to win points and compete with other players by “hacking” virtual nodes attached to real-world landmarks. To hack a node, they need to actually stand near the landmark, giving the game a distinctly physical dimension.

Some of these AR programs are genuinely remarkable, and carry with them the seeds of a changing paradigm. The new version of Google Translate, for example, can perform real-time spoken-word translation, as well as translating written words almost instantaneously. The Babel fish from The Hitchhiker’s Guide to the Galaxy gets one step closer.

However, HoloLens provides something a little different. Unlike mobile phone apps and Google Glass, the sensory overlays provided by HoloLens are interactive digital objects. They don’t merely provide parenthetical or additional information about things in the world; they further populate the world around us.

The futures of where we live and work

If you’re interested in Microsoft’s vision of a future filled with digital objects, check out its publicity video: the promises may seem overwhelming (one might even say “unbelievable”). According to Microsoft, with a liberal application of augmented reality, the home and office become deeply and richly interactive in a way that has hitherto been impossible.

Objects are designed and tested on the fly. A single pinched finger or flicked wrist enlarges or crumples or dismisses instantly and organically. Meanwhile, entire classes of consumer items are rendered completely redundant, collapsed into HoloLens and products like it.
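To make the “mediated perception” loop described above concrete, here is a toy sketch in Python using OpenCV. It is emphatically not Microsoft’s technology, just the minimal pattern every AR system shares: capture a live frame, generate computer-drawn content, and composite the two before display. The webcam index and the overlay content are placeholder assumptions; a real AR system would render pose-tracked 3D objects rather than a flat label.

```python
# Minimal "mediated perception" loop: live video in, augmented video out.
import cv2

cap = cv2.VideoCapture(0)  # default webcam (placeholder device index)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Computer-generated layer: here just a panel and a label, but this is
    # where an AR system would draw tracked, pose-anchored virtual objects.
    cv2.rectangle(frame, (10, 10), (320, 60), (0, 0, 0), -1)
    cv2.putText(frame, f"AR overlay on {w}x{h} feed", (20, 45),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2)
    cv2.imshow("augmented view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```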
If Microsoft realises the future it proposes, AR products will see televisions, gaming systems, music players, desktop computers and even cosmetic objects like wall art and decorative carpets consigned to the dustbin of history. We will no longer have a need for these physical objects; instead, they will be projected into our field of perception as strictly digital entities.

To be clear: this is not a prediction on my part. Far be it from me to make predictions about the purchasing habits of other people (I can barely keep track of my own). This, however, appears to be Microsoft’s vision. Although varnished into a high gloss, what is perhaps most telling about its vision of the future is not what is new, but what is missing. Televisions, gaming consoles, paintings: these objects have no place in Microsoft’s future.

The future of consumption

If this outcome is realised, it seems obvious to expect radical shifts in manufacturing and other secondary industries. Already there have been murmurs of concern and excitement about the possible economic ramifications of cheap 3D printing, particularly with regard to what it means for the manufacture of simple items.

If Microsoft is successful in killing off the television and other media apparatuses, we may well also see manufacturing shrink from the other end. If the HoloLens and products like it do supplant other consumer electronics, we will witness the slow, awful collapse of what is currently a highly speciated consumer landscape into a homogeneous field of kooky-looking headsets.

However, even as manufacturing is forced to grapple with greater decentralisation and greater automation, not to mention potential market erosion at both the top and bottom ends of the sector, those who create, market and sell entertainment content will almost certainly find AR technologies an enormous boon.

Marketers and content producers will have a window into our habits and tastes, sharing material funded by consumer commitment to almost-negligible microtransactions.

Meanwhile, consumers will almost certainly fight back. Already a great many web browsers come installed with native ad and pop-up blockers. Moreover, having tired of product placement in movies and the indignity of “advertorials” in magazines, several designers and developers are working on products that block ads in the real world. It seems fair to expect that this presages a subtle, extremely sophisticated arms race between those who consume content and those who produce it.

The sky is not falling

These developments might sound like the first stages of a miserable Gibsonian future, but there’s no need to panic just yet.

Indeed, there’s plenty of reason to think that the widespread uptake of AR technologies will, on the whole, make life easier. However, just as the internet dramatically changed the way business is conducted (witness the slow demise of brick-and-mortar bookstores), it seems reasonable to expect that the spread of AR technologies will introduce its own difficulties. But we should not take our own predictions too seriously.

After all, we’ve been privy to these kinds of statements before. People who visited Norman Bel Geddes’ Futurama exhibit at the New York World’s Fair in 1939 were awarded a souvenir pin upon departure: “I have seen the future”.
Now, with the benefit of hindsight, we can look back upon that claim, along with some of the more outlandish predictions made by our forebears (flying cars, robot servants, Jetsonian spaceflight, post-scarcity economics), with an indulgent smirk: chrome, finned cars and bakelite have slid seamlessly into fashionable kitsch, ill-fated relics of a future that never was.

There is, of course, a lesson here. In our own era of social networking, genetically modified foodstuffs and tawdry capitalism, it behoves us to be a little cynical about the promises and drawbacks of an augmented reality, in much the same way that we mock the promises of the Atomic Era.

Of course, I’d be lying if I said I weren’t excited. I am, and very much. There’s too much of the techno-utopian in me not to feel a subtle thrill that the future we’ve been promised is finally arriving.

But nonetheless, I worry, and I reserve judgement. Not very much, granted, and not very loudly, but amid the excitement there is a subtle sense of disquiet, and the sly hint of a disdainful sneer.

Follow StartupSmart on Facebook, Twitter, and LinkedIn.

This article was originally published on The Conversation. Read the original article.

Startups to receive jumpstart from NRMA’s first accelerator program

1:34AM | Tuesday, 20 January

The National Roads and Motorists’ Association (NRMA) has announced the startups taking part in the company’s first accelerator program.

Six startups have been chosen for the Jumpstart program, an initiative looking to help startups scale by tapping into the NRMA’s 2.4 million customers.

The startups are:

Careseekers: a national service that connects people needing in-home carers with people looking for carer work.

WunderWalk: a personalised pocket tour guide. The app creates your ultimate outing, covering shopping, eating, drinking, entertainment and sightseeing anywhere in the world.

Gamurs: a gaming social network where gamers connect with others and personalise their gaming experience.

Camplify: connects would-be campers with owners of caravans and RVs, holiday parks, camping grounds and camping-related experiences. Its goal is to be the “Airbnb” of the camping world.

OTTO by Gizmosis: advanced voice recognition technology to reduce driver distraction, making and receiving mobile phone calls and texts without the driver touching the phone.

Hive UAV: provides automated remote aerial monitoring using drones (UAVs) for agriculture, emergency services and industry.

Participants receive a $30,000 grant and a free co-working space, and take part in a six-month mentorship program in exchange for a 10% stake.

NRMA Group chief executive Tony Stuart said in a statement that the startups chosen to take part in the program have a steep learning curve ahead of them.

“Each business group is given the opportunity to learn from the best,” he says.

“They will receive business advice, training and mentoring from recognised experts in the startup world.”

Stuart also pointed out that the program’s participants come from across Australia and include companies founded by women.

“We are thrilled with the diversity of our Jumpstart participants and have selected teams from Sydney, Melbourne and Brisbane,” he says.

“Participants are of all ages, with two of the businesses established by young women.”

Follow StartupSmart on Facebook, Twitter, and LinkedIn.

Zombie technology: From Palm to Yahoo, the companies that refuse to die

1:28AM | Monday, 19 January

When technology, and the companies behind it, fails, the end can come in a number of different ways.

A technology can be mercifully put down, as with Google’s failed hardware media player, the Nexus Q. Alternatively, a failing company can be bought and shut down, as in the case of the once-famous personal digital assistant maker Palm, which was bought, and then shut down, by HP.

Failing companies can also enter a more indeterminate, zombie state, where the company may still earn enough money to stay open, but the company itself, and the products it produces, will never again be a significant force in the technology landscape.

Recognising a zombie company

Recognising zombie companies and technology is relatively easy. A languishing share price that shareholders are clearly holding onto only in the hope that the company will be bought is one clear indicator. BlackBerry’s shares, for example, popped 30% on the rumour that Samsung was about to buy the beleaguered mobile phone company. The shares crashed back to their original value after the company denied the reports. Twitter and Yahoo also both benefited from the suggestion by ex-CEO Ross Levinsohn that they should merge.

The fact that the market responds to these types of rumours is a clear sign that the companies have exhausted the option of developing their own products to stay relevant or compete against the market leaders.

Discussions about the death of a company or technology

Another indicator of a zombie company is the number of discussions about whether the company or technology is actually dead, or whether it will see a resurrection. This is being played out right now after Google’s announcement that its much-maligned smart glasses were being pulled from public sale. Commentators are divided as to whether this signifies the complete death of the product or merely a pause before some form of re-launch. Google Glass has become a zombie product because even if it does survive, it will never attract anything other than marginal interest.

In another case, reviews of BlackBerry’s latest phone, the Passport, have tried to imply that this will somehow reverse the company’s fortunes. Others propose growth for the company through services rather than hardware.

The key thing with zombie companies, however, is not to confuse the ability to stay in business with the business actually being viable. In the UK in 2013, for example, there were approximately 160,000 companies that were capable of staying afloat because they could pay the interest on their loans, but had no way of ever paying back the loans themselves.

Companies like Twitter, which is yet to make a profit from anything other than selling its shares, can keep going on their IPO proceeds and by convincing people to invest further on the basis that they will eventually make money. The interesting thing with Twitter is the belief that it can still make money somehow, with the right management. There are increasing calls for CEO Dick Costolo to resign, even though it may simply be that there is no viable way for Twitter to make enough money from its social network.

Zombie technologies

Zombie technologies pose a greater problem than zombie companies because they affect everyone involved with the technology.
Zombie technologies are interesting because they often result from over-hyped expectations about their significance, leading to a gold-rush surge of companies trying to catch the early wave of expectation.

Massive Open Online Courses (MOOCs), for example, were going to transform the higher education sector by offering high-quality, free, online courses to the world. Companies like Coursera are still going only because of the large amounts of money they have raised from venture capitalists. Unfortunately, the higher education industry proved resilient to change, and Coursera’s attempts to make money out of ongoing professional education are never going to realise the ambitions of its investors. The same outcome holds for other MOOC companies like Udacity and edX.

Another topical zombie technology is crypto-currency, such as Bitcoin. Bitcoin’s 80% fall in value since its peak over the past year has cemented its general failure to gain acceptance by governments, the financial sector and the public at large. This doesn’t mean the end of Bitcoin, as there will be fringe uses for the technology supported by a core group of loyal fans. Its zombie state, however, will continue to be confused with a technology simply waiting for the right market opportunity to become the basis for the world’s future digital economy.

Zombie companies present a real problem in that they lock in funds and employees who could otherwise be working more productively within their own startups or other companies.

Of course, eventually these companies will stop trading, or be bought for their remaining assets, but that time may be surprisingly far into the future.

This article was originally published on The Conversation. Read the original article.

Health tech startup uses wearables to help seniors have more mobility

1:17PM | Sunday, 18 January

Two-year-old Sydney health tech startup mCareWatch is attracting interest in overseas markets with a smartwatch and platform that helps carers monitor seniors both inside and outside the home.

mCareWatch’s core product is a waterproof smartwatch that combines a mobile phone, with its own SIM, with GPS and Wi-Fi location tracking. Unlike pendant and personal alarm systems, such as the sensor platform Curo, the standalone smartwatch works outside the home.

Aside from telling the time, the watch has an SOS emergency notification button and can make calls to one of three numbers in an emergency. It can also be set to send a ‘geo-fence’ notification when the wearer moves beyond a particular distance from their home, which can be used to alert carers to a lost dementia patient (a minimal sketch of such a check appears at the end of this article).

Carers, be they a family member or a staff member at an aged care facility, can remotely monitor patients either through an app (available for iOS) or through a cloud-based web dashboard.

The system was created by brothers Paul Apostolis and Peter Apostolopoulos after a health scare involving their elderly father, and comes as the health tech innovation community booms across Adelaide, Brisbane and Melbourne.

Apostolopoulos told Private Media the device is aimed at providing extra mobility for wearers inside and out, allowing them to visit friends, go shopping or have an evening walk with peace of mind.

“We launched the first generation about two years ago. We knew we wanted to enter quickly to test the market and get feedback,” Apostolopoulos says. “When we first launched, the service focused around the mobile app and was originally around the consumer being able to monitor mum and dad remotely.

“We launched the second generation with new features like Wi-Fi and an improved charging mechanism, and we took it into aged care providers with a software solution… Our customers now include Able Care, St Vincent Care and Bankstown City Aged Care, which introduced it as part of their independent living package.”

The product is also gaining a significant amount of interest across South East Asia; the company appointed a distributor in Malaysia last year and recently opened an office in Indonesia.

The company plans to introduce a ‘third generation’ version of the product around April or May. It has also identified other potential verticals as possible markets, such as lone workers, employees in the logistics industry, and the security industry.

“The next generation will take it more into the mobile health space. It will allow the watch to be a community hub. You will be able to connect biometric measures throughout the home to it, and it will send that information to a cloud platform and then package it for the appropriate person.”

Apostolopoulos says that, especially with an ageing population, health tech is set for growth over the coming years.

“Health tech is definitely a growth area and that’s based on wearables. Preventative care is important as people stay at home more and you need technology to monitor patients’ health in a way that’s efficient.

“Preventative care is an important issue because you can provide an intervention before someone winds up in a hospital, which would mean more spending by governments.”

Follow StartupSmart on Facebook, Twitter, and LinkedIn.
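As a technical footnote to the geo-fence feature described above: the core check is simple enough to sketch. The snippet below is a generic illustration, not mCareWatch’s implementation. It uses the haversine formula to compute the distance between the watch’s latest GPS fix and a stored home location, and flags a breach when a radius is exceeded; the coordinates, radius and alert mechanism are all invented for the example.

```python
# Generic geo-fence check: alert when a GPS fix strays too far from "home".
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6371000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def geofence_breached(fix, home, radius_m):
    """True if the latest GPS fix lies outside the home radius."""
    return haversine_m(fix[0], fix[1], home[0], home[1]) > radius_m

# Example values (invented): a home in Sydney with a 500 m fence.
home = (-33.8688, 151.2093)
fix = (-33.8745, 151.2140)  # latest watch position, ~770 m away
if geofence_breached(fix, home, radius_m=500):
    print("geo-fence alert: notify carer")  # a real system would push/SMS
```

In practice a production system would also debounce noisy GPS fixes (for example, requiring several consecutive out-of-fence readings) before alerting a carer.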

Most Australians would rather give up their TV than their smartphone: Survey

12:38PM | Sunday, 7 December

A majority of Australians would rather go without television than their smartphone, while the proportion of users using their mobile phone for just calls and SMS has fallen to just 9%, according to a new survey.

The Australian Mobile Phone Lifestyle Index was commissioned by the Australian Interactive Media Industry Association and conducted by strategic research analysts Complete the Picture.

It combined a survey of 1459 respondents with Australian Bureau of Statistics Census demographic data and socioeconomic status data from the Household Expenditure Survey.

The figures show that 61% of Australians would rather go without television than their mobile phone, while 50% would rather go without their PC or tablet than their mobile. Meanwhile, 30% would rather do without their car than their mobile phone.

A little over one-third of Australians (34%) have already ditched the landline in favour of just their mobile phone, with a further 48% saying that while they still have a landline connected, they rarely use it.

Another key finding is that growth in the Australian smartphone market is close to saturation point. Around 89% of Australians now own a smartphone, up slightly from 88% last year and up significantly from 67% in 2011.

“The recorded ownership figures will also vary depending on whether it is being measured as a percentage of the overall number of mobile phone subscriptions in Australia (higher than the total number of Australians) or as a percentage of all Australians or just adult Australians,” the report cautions.

The ownership rate for smartphones (89%) is now higher than for either computers (88%) or tablets (60%), with 53% owning all three devices. However, if given the choice between the three devices, 50% would choose their mobile phone, 34% would pick their computer, and just 16% would pick their tablet.

Despite this, when it comes to buying online, many still opt for their desktop or laptop. Around 90% of PC owners have used their computers to make a purchase and 75% of tablet owners have used a tablet to buy a product. In contrast, the percentage of mobile phone users who have used their devices to make a purchase is lower, at 60%.

Of those making purchases from their mobile phone, the most common thing to buy is tickets (including movie and plane tickets) at 60%. This is followed by digital content (54%), clothes/shoes/jewellery (41%), books (25%), services (16%), consumer electronics (15%) and groceries (11%).

Finally, the report looked at the controversial topic of whether users prefer mobile websites or apps. It found 7% of users use websites exclusively and 28% predominantly use websites. In contrast, 3% use apps exclusively and 24% prefer to use apps. Around 25% use both equally, while 12% opt to use neither.

Follow StartupSmart on Facebook, Twitter, and LinkedIn.

Government report identifies six disruptive tech trends to watch

11:07PM | Sunday, 23 November

Mobile messaging apps such as WhatsApp are killing the traditional text message, while multi-screening is going mainstream, according to a new Australian Communications and Media Authority paper.

The ACMA paper, titled Six emerging trends in media and communications, attempts to identify disruptive media and communications trends that “strain the effectiveness and efficiency of existing regulatory settings”.

Here are the six media and communications trends identified in the report:

1. Communications go over the top

Consumers are increasingly rejecting carrier-based phone calls and text messages in favour of apps and online services such as Apple iMessage, Facebook Messenger, Google Hangouts, Snapchat and Microsoft’s Skype.

According to the report, revenues from fixed-line phone services have collapsed by 34% in five years, from $18.296 billion in 2008 to just $12.045 billion in 2013.

Over the same time frame, the number of voice over internet protocol (VoIP) users has surged from 2.1 million to 4.6 million.

This extra data use has, however, been good news for mobile phone carriers, which have seen their revenues surge from $15.967 billion to $20.014 billion.

2. Consumers build their own links

It’s not just the number of communications apps that is booming. Australian consumers are using them with a wider variety of devices, connected over a growing number of network technologies.

Consumers now regularly switch between fixed-line internet connections, Wi-Fi, mobile broadband and – especially in remote areas – satellite connections, depending on the time of day.

The number of devices they use is also increasing, with the proportion of Australians owning a tablet, laptop and smartphone growing from 28% in 2013 to 53% in 2014.

3. Wearables are set to boom

On top of smartphones, tablets and laptops, the report predicts wearables (including Google Glass, smartwatches and fitness trackers) will become increasingly common over the coming years.

The report suggests the number of wearables worldwide will grow from 22 million in 2013 to 177 million in 2018.

It also predicts that an increase in the number of devices running Google’s Android Wear platform, along with the release of the Apple Watch early next year, will accelerate this trend.

4. Online content is going mainstream

The internet is not just disrupting the way we communicate.

According to the report, consumers are viewing a greater number of TV services (including pay TV, broadcast TV, streaming TV and catch-up TV) delivered to a growing number of devices, over a growing number of network technologies.

In a typical week, 97% of Australians watch a free-to-air or pay TV service. By contrast, one in two Australians have watched online TV over the past six months. This includes professionally produced catch-up or streaming TV services, pirated movies and content from video sites such as YouTube.

Meanwhile, people aged between 16 and 24 now watch more TV over the internet than they do from broadcast television services.

5. Multi-screening is now mainstream

In many cases, new forms of television are complementing, rather than replacing, older ones.

The report shows 74% of Australians with internet access regularly watched TV and used the internet at the same time, up 25 percentage points from 2009. The figure is as high as 89% for people aged 25 to 34.
Overall, 71% of people still prefer to watch TV shows and movies on television, compared to on mobile phones (5%), tablets (4%) and computers (29%).

Meanwhile, user-generated content is mostly watched on computers (71%) or mobile phones (41%), rather than tablets (17%) and televisions (10%).

6. TV is still the one for news

Finally, when it comes to getting the news, the more things change, the more they stay the same.

The report shows that 92% of free-to-air or subscription television viewers watched a news or current affairs program on television in 2014.

And while newspaper circulation dived 18% between 2009 and 2013, TV news viewing dropped just 10% over the same period.

Image credit: Flickr/alvy

Follow StartupSmart on Facebook, Twitter, and LinkedIn.
