artificial intelligence


Ada Lovelace and the role models who guide women towards a life less ordinary

10:59AM | Friday, 16 October

A century before the first computers, Ada Lovelace wrote a study on the potential of Charles Babbage’s yet-to-be-built Analytical Engine. The Analytical Engine is regarded as the world’s first computer, and Lovelace as the world’s first computer programmer. She foresaw how Babbage’s design could be a general purpose computer, one that might manipulate not merely numbers but also music, even one day composing complex and scientific pieces. The Analytical Engine, she wrote, “weaves algebraic patterns just as the Jacquard loom weaves flowers and leaves”.

Almost 200 years on, much of what she proposed is now possible. Software today can compute aspects of our understanding of music, while models of harmony and other musical elements can encompass music of ever increasing complexity. Software can analyse compositions and extract the underlying musical structures.

With the right software and inputs, computers can indeed now generate complex pieces – following in the style of, say, classical composer Toru Takemitsu or jazz master Art Tatum – by learning or mimicking recurring musical patterns.

Computer programs can also extract the sensual dimensions of musical expressivity, for example the subtle differences between two musicians’ performances of the same composition. Mathematical and computational models have become valuable tools to interrogate what we know about music, and to open up new possibilities for musical expression. “The Analytical Engine has no pretensions whatsoever to originate anything”, Lovelace wrote, but by making music and science amenable to calculations they are “thrown into new lights, and more profoundly investigated”.

Lovelace made prescient predictions about computing despite living in a time when women were denied education. How did she surmount the odds to arrive at such remarkable insights about computing centuries before computers existed?

Helping hands

Outstanding role models have been found to be especially important to women, indicating that “someone like me can be successful”. Lovelace was not short on role models: her mother Annabella was well-schooled by former Cambridge University professors in classics, philosophy, mathematics, and science, contrary to convention. In turn, Annabella ensured her daughter was taught science and mathematics by the best minds in England (albeit as an antidote to her father Lord Byron’s artistic “madness”). Among Ada’s mentors was the Scottish astronomer and mathematician Mary Somerville.

Role models are even more important because of the unconscious associations we inevitably make between gender and the kinds of activities deemed suitable or appealing for men and women. This implicit bias tends to limit women to stereotypical roles, such as caring rather than executive positions. Women are judged more harshly than men: students rate female university lecturers more negatively than male lecturers for the same performance. Applications for academic science positions are judged more favourably when associated with a male name, leading to a higher starting salary and more career mentoring. Women leaders who act assertively and authoritatively are viewed more negatively than men expressing the same traits. And so on.

Ada, Countess of Lovelace and ‘Enchantress of Numbers’, as Babbage called her. William Henry Mote/Ada Picture Library

So while stand-out female role models exist, they often do not have the same opportunities as their male counterparts.
All-male lineups of keynote speakers at technology conferences are not uncommon, while women remain a minority in the highest echelons of classical music performance, composition and scholarship. The proportion of women working as I do at the intersection of music and technology, two male-dominated fields, is extremely small indeed.   Yet I was fortunate to have as mentor Jeanne Bamberger, professor of music and urban education at the Massachusetts Institute of Technology – a remarkable woman and a pioneer in music and artificial intelligence. A former child prodigy, Bamberger had studied with pianist Arthur Schnabel, theorist-composer Roger Sessions, and composers Olivier Messiaen and Ernst Krenek. A formidable woman unafraid of new ideas, she worked on music software such as Impromptu for music research and teaching. She introduced me to the early work of Christopher Longuet-Higgins and Mark Steedman, and inspired in me a lifelong passion to use mathematics and computing tools to investigate and explain what it is that musicians do, how we do it, and why.   While I’ve never questioned my choice to enter into this mathemusical world, it’s hard to ignore how few women there are. I was usually the only, or one of only two, female students in my computer science or mathematics classes, or on my operations research doctoral program. So it was with some satisfaction (deserved or not) that I found my doctoral dissertation at MIT on the mathematical modelling of tonality signed by four women: professors Jeanne Bamberger, Georgia Perakis (who preceded me in receiving the Presidential Early Career Award for Scientists and Engineers), Cynthia Barnhart, now chancellor of MIT, and myself.   So Ada Lovelace Day, this year marking the 200th anniversary of her birth, is a recognition of the need for visible and outstanding female role models in science, technology, engineering, and mathematics – and a celebration of the achievements of women working in these fields.   Lovelace may have been a computing pioneer, but the percentage of women studying computer science has plummeted since 1984 due to a lack of sense of belonging. This feeling, even more acute for women who veer off the beaten track into more esoteric fields, can be countered by education and role models – something we desperately need more of if we are to capitalise on the Ada Lovelaces of today.   Elaine Chew, Professor of Digital Media, Queen Mary University of London This article was originally published on The Conversation. Read the original article.

Rise of the humans: intelligence amplification will make us as smart as the machines

10:20PM | Tuesday, 13 October

In January this year Microsoft announced the HoloLens, a technology based on virtual and augmented reality (AR).

HoloLens supplements what you see with overlaid 3D images. It also uses artificial intelligence (AI) to generate relevant information depending on the situation the wearer is in. The information is then augmented onto your normal vision using virtual reality (VR).

Microsoft’s HoloLens in action.

It left a lot of us imagining its potential, from video games to medical sciences. But HoloLens might also give us insight into an idea that goes beyond conventional artificial intelligence: that technology could complement our intelligence, rather than replacing it, as is often the case when people talk about AI.

From AI to IA

Around the same time that AI was first defined, another concept emerged: intelligence amplification (IA), which was also variously known as cognitive augmentation or machine augmented intelligence.

In contrast to AI, which is a standalone system capable of processing information as well as or better than a human, IA is actually designed to complement and amplify human intelligence. IA has one big edge over AI: it builds on human intelligence that has evolved over millions of years, while AI attempts to build intelligence from scratch.

IA has been around from the time humans first began to communicate, at least in a very broad sense. Writing was among the first technologies that might be considered as IA, and it enabled us to enhance our creativity, understanding, efficiency and, ultimately, intelligence.

For instance, our ancestors built tools and structures based on trial and error methods assisted by knowledge passed on verbally and through demonstration by their forebears. But there is only so much information that any one individual can retain in their mind without external assistance.

Today we build complex structures with the help of hi-tech survey tools and highly accurate software. Our knowledge has also much improved thanks to the recorded experiences of countless others who have come before us. More knowledge than any one person could remember is now readily accessible through external devices at the push of a button.

Although IA has been around for many years in principle, it has not been a widely recognised subject. But with systems such as HoloLens, IA can now be explicitly developed, and faster than was possible in the past.

From AR to IA

Augmented reality is just the latest technology to enable IA, supplementing our intelligence and improving it.

The leap that Microsoft has taken with HoloLens is using AI to boost IA. Although this has also been done in various disparate systems before, Microsoft has managed to bring all the smaller components together and present it on a large scale with a rich experience.

Augmented reality experience on HoloLens. Microsoft

For example, law enforcement agencies could use HoloLens to access information on demand. It could rapidly access a suspect’s record to determine whether they’re likely to be dangerous. It could anticipate the routes the suspect is likely to take in a pursuit. This would effectively make the officer more “intelligent” in the field.

Surgeons are already making use of 3D printing technology to pre-model surgical procedures, enabling them to conduct some very intricate surgeries that were never before possible. Similar simulations could be done by projecting the model through an AR device, like HoloLens.
Blurred lines

Lately there has been some major speculation about the threat posed by superintelligent AI. Philosophers such as Nick Bostrom have explored many issues in this realm.

AI today is far behind the intelligence possessed by any individual human. However, that might change. Yet the fear of superintelligent AI is predicated on there being a clear distinction between the AI and us. With IA, that distinction is blurred, and so too is the possibility of there being a conflict between us and AI.

Intelligence amplification is an old concept, but it is coming to the fore with the development of new augmented reality devices. It may not be long before your own thinking is enhanced to superhuman levels thanks to a seamless interface with technology.

Alvin DMello, PhD Candidate, Queensland University of Technology. This article was originally published on The Conversation. Read the original article.

The web has become a hall of mirrors, filled only with reflections of our data

9:32AM | Thursday, 10 September

The “digital assistant” is proliferating, able to combine intelligent natural language processing, voice-operated control over a smartphone’s functions and access to web services. It can set calendar appointments, launch apps, and run requests. But if that sounds very clever – a computerised talking assistant, like HAL 9000 from the film 2001: A Space Odyssey – it’s mostly just running search engine queries and processing the results.

Facebook has now joined Apple, Microsoft, Google and Amazon with the launch of its digital assistant M, part of its messaging smartphone app. Its special sauce is that M is powered not just by algorithms but by data serfs: human Facebook employees who are there to ensure that every request it cannot parse is still fulfilled, and who in doing so train M by example. That training works because every interaction with M is recorded – that’s the point, according to David Marcus, Facebook’s vice-president of messaging: “We start capturing all of your intent for the things you want to do. Intent often leads to buying something, or to a transaction, and that’s an opportunity for us to [make money] over time.”

Facebook, through M, will capture and facilitate that “intent to buy” and take its cut directly from the subsequent purchase rather than as an ad middleman. It does this by leveraging messaging, which was turned into a separate app of its own so that Facebook could integrate PayPal-style peer-to-peer payments between users. This means Facebook has a log not only of your conversations but also of your financial dealings. In an interview with Fortune magazine at the time, Facebook product manager Steve Davies said: “People talk about money all the time in Messenger but end up going somewhere else to do the transaction. With this, people can finish the conversation the same place [they] started it.”

In a somewhat creepy way, by reading your chats and knowing that you’re “talking about money all the time” – what you’re talking about buying – Facebook can build up a pretty compelling profile of interests and potential purchases. If M can capture our intent it will not be by tracking what sites we visit and targeting relevant ads, as per advert brokers such as Google and Doubleclick. Nor by targeting ads based on the links we share, as Twitter does. Instead it simply reads our messages.

‘Hello Dave. Would you like to go shopping?’ summer1978/MGM/SKP, CC BY-ND

Talking about money, money talks

M is built to carry out tasks such as booking flights or restaurants or making purchases from online stores, and rather than forcing the user to leave the app in order to visit a web store to complete a purchase, M will bring the store – more specifically, the transaction – to the app.

Suddenly the 64% of smartphone purchases that happen at websites and mobile transactions outside of Facebook are brought into Facebook. With the opportunity to make suggestions through eavesdropping on conversations, in the not too distant future our talking intelligent assistant might say: “I’m sorry, Dave, I heard you talking about buying this camera. I wouldn’t do it if I were you, Dave: I found a much better deal elsewhere. And I know you’ve been talking about having that tattoo removed. I can recommend someone – she has an offer on right now, and three of your friends have recommended her service. Shall I book you in?”

Buying a book from a known supplier may be a low risk purchase, but other services require more discernment.
What kind of research about cosmetic surgery has M investigated? Did those three friends use that service, or were they paid to recommend it? Perhaps you’d rather know the follow-up statistics than have a friend’s recommendation.

Still, because of its current position as the dominant social network, Facebook knows more about us – by name, history, social circle, political interests – than any other single internet service. And it’s for this reason that Facebook wants to ensure M is more accurate and versatile than the competition, and why it’s using humans to help the AI interpret interactions and learn. The better digital assistants like M appear to us, the more trust we have in them. Simple tasks performed well build a willingness to use that service elsewhere – say, recommending financial services, or that cosmetic treatment – which stand to offer Facebook a cut of much more costly purchases.

No such thing as a free lunch

So for Facebook, that’s more users spending more of their time using its services and generating more cash. Where’s the benefit for us?

We’ve been trained to see such services as “free”, but as the saying goes, if you don’t pay for it, then it’s you that’s the product. We’ve seen repeatedly in our Meaningful Consent Project that it’s difficult to evaluate the cost to us when we don’t know what happens to our data.

People were once nervous about how much the state knew of them, with whom they associated and what they did, for fear that if their interests and actions were not aligned with those of the state they might find themselves detained, disappeared, or disenfranchised. Yet we give exactly this information to corporations without hesitation, because we find ourselves amplified in the exchange: for each book, film, record or hotel we like there are others who “like” it too.

The web holds a mirror up to us, reflecting back our precise interests and behaviour. Take search, for instance. In the physical world of libraries or bookshops we glance through materials from other topics and different ideas as we hunt down our own query. Indeed we are at our creative best when we absorb the rich variety in our peripheral vision. But online, a search engine shows us only things narrowly related to what we seek. Even the edges of a web page will be filled with targeted ads related to something known to interest us. This narrowing self-reflection has grown ubiquitous online: on social networks we see ourselves relative to our self-selected peers or idols. We create reflections.

The workings of Google, Doubleclick or Facebook reveal these to be two-way mirrors: we are observed through the mirror but see only our reflection, with no way to see the machines observing us. This “free” model is so seductive – it’s all about us – yet it leads us to become absorbed in our phones-as-mirrors rather than the harder challenge of engaging with the world and those around us.

It’s said you shouldn’t look too closely at how a sausage is made for fear it may put you off. If we saw behind the mirror, would we be put off by the internet? At least most menus carry the choice of more than one dish; the rise of services like M suggests that, despite the apparent wonder of less effortful interactions, the internet menu we’re offered is shrinking.

mc schraefel, Professor of Computer Science and Human Performance, University of Southampton. This article was originally published on The Conversation. Read the original article.

How to embrace technology without dooming humanity to destruction

8:15AM | Friday, 7 August

The world today is facing some serious global challenges: creating sustainable development in the face of climate change, safeguarding rights and justice, and growing ethical markets, for a start. All of these challenges share some connection with science and technology – some more explicitly than others.   We are currently witnessing a growth in traditional technology – with computers processing data in new and exciting ways. We’re also seeing the birth of transformative technology, such as bioengineering. But the question is not about old or new technology – rather, it is about how they are being used to facilitate or change human behaviour.   Good tech, bad tech Developments in information and communication technology (ICT) are vitally important to help us make better, more informed choices about how we prepare for the future. For instance, democratic governance is about being able to articulate contesting views across society and from different parts of the government. The advent of the internet allows us to receive and spread such information. Likewise, security and public safety relies on having good information on risks and their potential threats. Consider, for example, the way police departments in New York and Memphis have been able to make better use of data to prevent crime.   While science and technology are giving us the tools to improve, they – and the people who use them – are also presenting serious problems. Technology connects us, but it also makes us vulnerable to cyber-attacks. The amount of information that we produce every day through our phones and computers can help shape our environment to cater to us. But it also means that our identities are perhaps more vulnerable than ever before, with smart phones and club cards tracking our every move.   Similarly, in biology, we are able to make amazing gains in physical corrections, repairs, amendments, and augmentations, whether replacing old limbs or growing new ones. But we must also seriously consider the issues around ethics, safety and security. The debate around gain of function experiments, which give diseases new properties to help us study them, is a good example.   Hopes and fears To help us grasp the shape and scope of these challenges, the Millennium Project – an international think tank – releases an annual State of the Future report, which outlines the major hurdles facing humanity over the next 35 years. It illustrates our complicated relationship with science and technology. Just as the beginning of the industrial revolution influenced the underlying themes of Mary Shelley’s Frankenstein, we too are worried about the unforeseen complications that the latest developments could bring.   The report tells us of the great hopes that synthetic biology will help us write genetic code like we write computer code; about the power of 3D printing to customise and construct smart houses; of the future of artificial intelligence where the human mind and the computer mind meet, rather than conflict.   Frankenstein bringing his monster to life. twm1340/flickr, CC BY-SA But at the same time, the authors of the report – Jerome Glenn, Elizabeth Florescu and their team – express fears that there is a great chance we could be outstripped in pace by the evolution of scientific and technological development. The authors suggest that we seek out human-friendly control systems, since advances in these fields mean that lone individuals could make and deploy weapons of mass destruction.   
There are two concerns here: one to do with agency, the other relating to structures. Individuals have the potential to use scientific and technological advances to cause harm. This is a growing problem, as science and technology continues to degrade what Max Weber referred to as the state’s “monopoly on violence”.   To reduce the risks associated with agency, we will rely on structures that encourage good behaviour, such as systems for justice, education and the provision of basic necessities for life.   But it is not clear how we will arrive at such structures, and where the responsibility to develop them will fall; whether it’s to regions, states or international organisations. This is especially pressing, as many states have either foregone a welfare system, or are in the process of destroying it. It’s unclear where education and training come in, or how regulatory control is to work across so many local, national, societal, and commercial boundaries.   An ethical approach? Whether or not our global society is outstripped by science and technology largely depends on us. And this is part of the problem, as William Nordhaus warned us as early as 1982, in his work on the Global Commons. The report calls for an ethical approach to creating systems, forms of information, and models of control that would allow us to engage with science and technology as it develops.   This means embedding ethical considerations into the way we think about the future. The authors want a larger discussion on global ethics, such as that we have seen rooted in the work done by the International Organisation for Standardisation – the world’s largest developer of voluntary international standards.   Ultimately, where we end up in relation to science and technology is a matter of coming to terms with how we interact with these developments. Until we do so, a safe and prosperous world may elude us.   David J Galbreath is Professor of International Security, Director of Centre for War and Technology at University of Bath. This article was originally published on The Conversation. Read the original article.

Why we should welcome 'killer robots', not ban them

7:39AM | Friday, 31 July

The open letter signed by more than 12,000 prominent people calling for a ban on artificially intelligent killer robots, connected to arguments for a UN ban on the same, is misguided and perhaps even reckless.   Wait, misguided? Reckless? Let me offer some context. I am a robotics researcher and have spent much of my career reading and writing about military robots, fuelling the very scare campaign that I now vehemently oppose.   I was even one of the hundreds of people who, in the early days of the debate, gave their support to the International Committee for Robot Arms Control (ICRAC) and the Campaign to Stop Killer Robots.   But I’ve changed my mind.   Why the radical change in opinion? In short, I came to realise the following.   The human connection The signatories are just scaremongers who are trying to ban autonomous weapons that “select and engage targets without human intervention”, which they say will be coming to a battlefield near you within “years, not decades”.   But, when you think about it critically, no robot can really kill without human intervention. Yes, robots are probably already capable of killing people using sophisticated mechanisms that resemble those used by humans, meaning that humans don’t necessarily need to oversee a lethal system while it is in use. But that doesn’t mean that there is no human in the loop.   We can model the brain, human learning and decision making to the point that these systems seem capable of generating creative solutions to killing people, but humans are very much involved in this process.   Indeed, it would be preposterous to overlook the role of programmers, cognitive scientists, engineers and others involved in building these autonomous systems. And even if we did, what of the commander, military force and government that made the decision to use the system? Should we overlook them, too?   We already have automatic killing machines We already have weapons of the kind for which a ban is sought.   The Australian Navy, for instance, has successfully deployed highly automated weapons in the form of close-in weapons systems (CIWS) for many years. These systems are essentially guns that can fire thousands of rounds of ammunition per minute, either autonomously via a computer-controlled system or under manual control, and are designed to provide surface vessels with a last defence against anti-ship missiles.   The Phalanx is just one of several close-in weapon systems used by the Australian Navy.     When engaged autonomously, CIWSs perform functions normally performed by other systems and people, including search, detection, threat assessment, acquisition, targeting and target destruction.   This system would fall under the definition provided in the open letter if we were to follow the signatories' logic. But you don’t hear of anyone objecting to these systems. Why? Because they’re employed far out at sea and only in cases where an object is approaching in a hostile fashion, usually descending in the direction of the ship at rapid speed.   That is, they’re employed only in environments and contexts whereby the risk of killing an innocent civilian is virtually nil, much less than in regular combat.   So why can’t we focus on existing laws, which stipulate that they be used in the most particular and narrow circumstances?   
The real fear is of non-existent thinking robots It seems that the real worry that has motivated many of the 12,000-plus individuals to sign the anti-killer-robot petition is not about machines that select and engage targets without human intervention, but rather the development of sentient robots.   Given the advances in technology over the past century, it is tempting to fear thinking robots. We did leap from the first powered flight to space flight in less than 70 years, so why can’t we create a truly intelligent robot (or just one that’s too autonomous to hold a human responsible but not autonomous enough to hold the robot itself responsible) if we have a bit more time?   There are a number of good reasons why this will never happen. One explanation might be that we have a soul that simply can’t be replicated by a machine. While this tends to be the favourite of spiritual types, there are other natural explanations. For instance, there is a logical argument to suggest that certain brain processes are not computational or algorithmic in nature and thus impossible to truly replicate.   Once people understand that any system we can conceive of today – whether or not it is capable of learning or highly complex operation – is the product of programming and artificial intelligence programs that trace back to its programmers and system designers, and that we’ll never have genuine thinking robots, it should become clear that the argument for a total ban on killer robots rests on shaky ground.   Who plays by the rules? UN bans are also virtually useless. Just ask anyone who’s lost a leg to a recently laid anti-personnel mine. The sad fact of the matter is that “bad guys” don’t play by the rules.   Now that you understand why I changed my mind, I invite the signatories to the killer robot petition to note these points, reconsider their position and join me on the “dark side” in arguing for more effective and practical regulation of what are really just highly automated systems.   Jai Galliott is Research Fellow in Indo-Pacific Defence at UNSW Australia. This article was originally published on The Conversation. Read the original article.

Games by numbers: machine learning is changing sport

5:30AM | Tuesday, 19 May

The drive to improve performance means elite sport is inundated with data from wearable technologies such as GPS, computer vision and match statistics.   So professional clubs are constantly on the lookout for tools that can help turn these data into usable and meaningful information.   One such tool gaining popularity is machine learning. Put simply, machine learning is a form of artificial intelligence, whereby computers are able to learn without being explicitly programmed by a human operator.   What makes machine learning algorithms so useful is their ability to be trained on large pre-existing data sets. These trained algorithms can be used to identify potentially complex yet meaningful patterns in the data, which then allows us to predict or classify future instances or events.   Machine learning approaches often outperform traditional statistical techniques, which are largely incapable of accounting for the dynamic and almost random patterns in so much of the data obtained from sport. A game of footy Consider a typical game of elite Australian Rules football. During any match played in the Australian Football League (AFL), sources of information relating to player movement and performance are available in near real time to coaches and support staff.   Despite access to this information, the ability of coaches to observe, process and evaluate the actions of 18 players on different areas of the field is limited. And that doesn’t even include the opposition. As humans, coaches simply do not possess the capacity to undertake such a task successfully.   Don Norman summarises this predicament in his book Things That Make Us Smart: Defending Human Attributes in the Age of the Machine. He says the power of the unaided human mind is overrated. This is due partly to our inability to overcome inherent limitations in areas such as memory and conscious reasoning:   Human memory is well tuned to remember the substance and meaning of events, not the details […] Humans can essentially attend to only one conscious task at a time. We cannot maintain attention on a task for extended periods.   In the sporting context, coaches are therefore limited with respect to their cognitive abilities. This can only be improved by developing external aids, such as machine learning, to enhance these skills.   Suppose we consider a short five-second period of play within an AFL match, featuring multiple players each undertaking different movement patterns and performing various skilled actions.   This brief section of the game could potentially be construed differently by multiple coaches working within a single team, depending on cognitive biases, including previous experience, prejudices and individual personality traits.   Machine learning thus has a place in providing rapid, objective evidence obtained from data in order to help inform coach decision-making. The science in sport But what about the scientific evidence for this approach? The peer-reviewed sports science literature is actually full of successful applications of machine learning to sport.   Examples from biomechanics, in particular, show extensive use of machine learning. Notably, pattern recognition algorithms have been developed to identify individual athlete movement sequences in a variety of different sports.   In soccer, machine learning match analysis has been used to identify the conservative strategies of away teams competing in the English Premier League. 
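Most of this work follows the recipe described earlier: train an algorithm on a large set of historical examples, then use it to classify or predict new instances. A minimal sketch of that workflow, assuming the scikit-learn library is available (the features and numbers below are invented placeholders, not real match data):

```python
from sklearn.ensemble import RandomForestClassifier

# Each row is one past match: [distance covered (km), tackles, shots, opponent rating].
# The values are invented for illustration only.
X_train = [
    [110.5, 62, 21, 0.71],
    [98.2, 55, 14, 0.64],
    [121.0, 70, 25, 0.80],
    [105.3, 49, 12, 0.58],
]
y_train = [1, 0, 1, 0]  # 1 = win, 0 = loss

# Fit the model to the historical examples, then predict an upcoming match.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

upcoming = [[112.0, 60, 18, 0.69]]
print(model.predict(upcoming))        # predicted result
print(model.predict_proba(upcoming))  # model confidence for each outcome
```

The algorithm and the features change from study to study, but the train-then-predict pattern is the same.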
Machine learning match analysis has also been used to discover the optimal methods by which teams obtain a shot on goal, or return the ball in tennis.

Machine learning has been used to predict the behaviour of individual athletes, such as cricket bowlers in the Indian Premier League, and team performance in the Asian Games based on factors varying from athlete age and experience levels to national social-cultural factors.

Researchers from the University of Coimbra, in Portugal, successfully implemented a suite of machine learning algorithms to identify talented basketball players based on their psychological characteristics and practice history.

A team from Deakin University developed a set of rules to explain the physical characteristics most strongly linked with Australian Rules football draft success.

A body of work focusing on automated classification of human movements including kicking, running and jumping using wearable technologies has also emerged.

Machine learning is also being used to help predict the return-to-play time for soccer players following an injury and in selecting the appropriate balance of batsmen and bowlers in a cricket match.

Machine learning vs human coaching

Improvements in technology and machine learning continue to progress the field towards artificial intelligence and real-time use in sport. But is it possible that computers will ever replace the coach?

Well, in some ways, they already have. Many elite sporting clubs already set specific thresholds for athletes during training. These are based on perceived reductions in performance or increases in injury risk if a threshold is exceeded.

The judgement on what is appropriate treatment of the athlete is made solely by a computer-based analysis of data collected in the field. For the moment, at least, the decision on whether or not to act on this information still remains with the coach.

But in future that may well change.

Sam Robertson is Senior Research Fellow (Victoria University/Western Bulldogs) at Victoria University. This article was originally published on The Conversation. Read the original article.

Apps tackling mental health issues and stress dominate Outware Mobile’s health hackathon

5:59AM | Tuesday, 12 May

Apps tackling issues around mental health and stress dominated the winners of Outware Mobile’s recent health hackathon.

Seventeen teams worked from the startup’s Richmond office to create working software within just eight hours.

The app development company partnered with organisations such as VicHealth to solve real-world problems that face the health sector – everything from monitoring rheumatoid disease to improving access to suitable sport and recreation facilities for people living with a disability.

The winners of the hackathon were:

3+Things, an iOS app encouraging people to reflect on what they’re grateful for in life with the aim of increasing overall happiness and combating depression.
The Flare Diary, an Android app aimed at helping users monitor their pain’s severity and location by audio or touch.
Swipe for Sport, an Android app to help people find sport and recreational facilities.
Schmooze, an iOS app that uses artificial intelligence to lighten the mood by simulating a conversation – allowing people to work through their problems and emotions.
Choice-O-Matic, an iPad game using positive reinforcement to teach young people how to open up about issues and start difficult conversations.
Uplift, an iOS game that teaches young people how to manage stress by practising their breathing.

Co-founder and director of Outware Mobile, Gideon Kowadlo, told StartupSmart the company decided to focus on health because it is an industry it is passionate about and particularly interested in.

“We have some existing clients in the health industry and we wanted to explore the area further,” he says.

“It’s also an area that can directly provide benefit to people and we wanted to generate some ideas in this space. Despite the fact that there are barriers to entry in terms of regulation, there are still a lot you can achieve and there’s a lot of exciting potential out of mobile and new developments.”

Kowadlo says while a theme hasn’t been chosen for the next hackathon, it will likely focus on a specific sector as it’s a great way to “dive deeper” into how innovation can be applied to a particular industry.

“While there were 17 entries all together and only a small handful of winners, a huge number of others could equally become useful apps,” he says.

“The important thing is the conversation it starts – we had partners there that were providing the challenges and they were all energised by the day as well. And that then starts them thinking about what the possibilities are and also starts a conversation in their organisations and the community.”

Why a robot could do your job

4:53AM | Monday, 20 April

Here’s a game to play over dinner. One person names a profession that they believe can’t be taken over by a machine, and another person has to make a case why it’s not so future-proof. We played this game on an upcoming episode of SBS’s Insight on the topic of the future of robots and artificial intelligence.

The first profession suggested was musician. An argument often put forward against artificial intelligence (AI) is that computers can’t be creative. But there are plenty of examples to counter this argument. For instance, computers can take plain sheet music and turn it into an expressive jazz performance, as my colleague Ramon Lopez de Mantaras has shown.

So, jazz musicians, watch out. Your jobs might not be safe from robot incursion.

The next option was police officer. It’s often said that computers can’t or won’t behave ethically. Unfortunately, Hollywood has already painted a very dystopian picture here in movies like Robocop and Terminator. And, as the current UN campaign to ban autonomous weapons demonstrates, we could easily end up there if we aren’t careful.

The third profession put forward was human resources. Naturally, this came from an HR consultant worried for her future job prospects. However, the bureaucratic side of HR is already easily automated. Indeed, we spend much of our lives on the phone already talking to machines. Can I speak to a real person, please?

On the other hand, the more human-facing side of HR is likely to be harder to automate. But as we argue in the next answer, it’s not clear that this will be impossible.

The fourth challenge was psychiatrist. Again, the human-facing nature of this would seem to offer significant resistance to automation. Nevertheless, there’s an interesting historical precedent.

A well known computer program called Eliza was the very first chatterbot. It unintentionally passed itself off as a real Rogerian psychotherapist.

Eliza was not very smart. Indeed, the program’s author, Joseph Weizenbaum, meant it more as parody than as therapist. However, his secretary famously asked to be left alone so she could talk in private to the chatterbot.

So, shrinks, watch out. Your jobs might not be safe.

The final challenge was Prime Minister.

On the one hand, this is a good answer, as one assumes there’s little routine to being Prime Minister but a lot of tough high-level decision making that would be hard for a machine to handle. On the other hand, it’s a poor winner of our little game. It may be the only job in the whole country that’s safe from robots.

In one final, beautiful irony, this forthcoming episode of Insight has the robots up on the stage. We, the supposed expert commentators, were in the audience. So, even TV pundits should watch out. Your jobs might not be safe either.

Net effects

What this discussion highlights is that the middle classes are likely to be increasingly squeezed by machine labour. Professions that we used to think were quite safe – like doctor, lawyer or accountant – will be increasingly automated.

Whenever technology takes away jobs, it tends to make new jobs and industries elsewhere. For example, printing removed the need for scribes but created the vast publishing industry in its stead. And publishing went on to create many other jobs in the industries that grew out of all the knowledge passed on in printed material.

More recently, computers have taken away many traditional jobs in the printing industry, like typesetters.
But we now see many new jobs in areas like self-publishing and web design.   Economists continue to argue over the net effects of technology. Does technology create more economic activity so we are all better off? Or does it put more people out of work, concentrating wealth in the hands of the few?   One thing seems sure. It requires us to adapt. And for this, we need an educated, high tech workforce. This brings the conversation back to higher education and the stalled reforms that now trouble this sector in Australia.   If there is one policy we need to get right, to future-proof Australia against machines and other disruptions, I would argue, this is it.   This article was originally published on The Conversation. Read the original article.

THE NEWS WRAP: Ashton Kutcher launches new venture capital fund

3:30PM | Sunday, 15 March

Ashton Kutcher and his business partner Guy Oseary are launching a new venture capital fund called Sound Ventures.

TechCrunch reports the fund will be stage-agnostic, allowing the pair to invest in later-stage startups.

Kutcher has previously invested in companies such as Uber, Spotify and Airbnb through his first fund A-Grade Investments.

The actor and tech investor was in Australia last month for the Tech My Way conference, where he speculated that virtual and augmented reality, biotechnology and artificial intelligence were the next big things in tech.

Controversial app developer slams critics

An Aussie app developer who promised to give thousands of dollars to charity and was exposed for not handing over the money has hit back in a rambling Facebook post.

Belle Gibson, the founder of The Whole Pantry, solicited donations from around 200,000 people for various causes and said she would give away a quarter of her company’s profits – however, an investigation by The Age found no such contributions were ever made.

Now the entrepreneur has hit back, according to Fairfax, writing in a Facebook post that those who were speaking to the media about her were bullying “myself and my family”.

“I know the work my company and it’s [sic] contents did changed [sic] hundreds of thousands for the better,” she said.

YouTube could be considering a subscription model for premium content

YouTube could soon have its own paid video on demand service, according to The Verge.

The company is exploring the concept as a means to improve its bottom line and allow popular content producers to access a higher percentage of ad revenue.

The rumours come from an unnamed executive at a company that partners with YouTube to produce video content.

Competition between streaming providers has heated up in the past 12 months, with Netflix confirming it is launching in Australia on March 24 and taking on local companies Quickflix and EzyFlix.

Overnight

The Dow Jones Industrial Average is down 145.91 points, falling 0.82% overnight to 17,749.31. The Aussie dollar is currently trading at around 76.23 US cents.

To stop the machines taking over we need to think about fuzzy logic

3:41AM | Wednesday, 11 March

Amid all the dire warnings that machines run by artificial intelligence (AI) will one day take over from humans we need to think more about how we program them in the first place.   The technology may be too far off to seriously entertain these worries – for now – but much of the distrust surrounding AI arises from misunderstandings in what it means to say a machine is “thinking”.   One of the current aims of AI research is to design machines, algorithms, input/output processes or mathematical functions that can mimic human thinking as much as possible.   We want to better understand what goes on in human thinking, especially when it comes to decisions that cannot be justified other than by drawing on our “intuition” and “gut-feelings” – the decisions we can only make after learning from experience.   Consider the human that hires you after first comparing you to other job-applicants in terms of your work history, skills and presentation. This human-manager is able to make a decision identifying the successful candidate.   If we can design a computer program that takes exactly the same inputs as the human-manager and can reproduce its outputs, then we can make inferences about what the human-manager really values, even if he or she cannot articulate their decision on who to appoint other than to say “it comes down to experience”.   This kind of research is being carried out today and applied to understand risk-aversion and risk-seeking behaviour of financial consultants. It’s also being looked at in the field of medical diagnosis.   These human-emulating systems are not yet being asked to make decisions, but they are certainly being used to help guide human decisions and reduce the level of human error and inconsistency. Fuzzy sets and AI   One promising area of research is to utilise the framework of fuzzy sets. Fuzzy sets and fuzzy logic were formalised by Lotfi Zadeh in 1965 and can be used to mathematically represent our knowledge pertaining to a given subject.   In everyday language what we mean when accusing someone of “fuzzy logic” or “fuzzy thinking” is that their ideas are contradictory, biased or perhaps just not very well thought out.   But in mathematics and logic, “fuzzy” is a name for a research area that has quite a sound and straightforward basis.   The starting point for fuzzy sets is this: many decision processes that can be managed by computers traditionally involve truth values that are binary: something is true or false, and any action is based on the answer (in computing this is typically encoded by 0 or 1).   For example, our human-manager from the earlier example may say to human resources:   IF the job applicant is aged 25 to 30 AND has a qualification in philosophy OR literature THEN arrange an interview.   This information can all be written into a hiring algorithm, based on true or false answers, because an applicant either is between 25 and 30 or is not, they either do have the qualification or they do not.   But what if the human-manager is somewhat more vague in expressing their requirements? Instead, the human-manager says:   IF the applicant is tall AND attractive THEN the salary offered should be higher.   The problem HR faces in encoding these requests into the hiring algorithm is that it involves a number of subjective concepts. Even though height is something we can objectively measure, how tall should someone be before we call them tall?   Attractiveness is also subjective, even if we only account for the taste of the single human-manager. 
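To see the difference in code, here is a minimal sketch. The first function encodes the crisp hiring rule exactly as stated, using plain true-or-false tests; the second gives “tall” a graded truth value of the kind developed in the next section, using the illustrative 140cm-to-190cm mapping described there. Taking the minimum for AND (and the maximum for OR) is one standard convention, not the only possible choice.

```python
def crisp_rule(age, has_philosophy, has_literature):
    # IF aged 25 to 30 AND (philosophy OR literature) THEN arrange an interview.
    # Every test here is strictly true or false.
    return 25 <= age <= 30 and (has_philosophy or has_literature)

def tall(height_cm):
    # Graded membership in the fuzzy set "tall": 0 at 140cm or below,
    # 1 at 190cm or above, rising linearly in between.
    if height_cm <= 140:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 140) / 50.0

def fuzzy_and(a, b):
    # One common convention: AND is the minimum of the truth values, OR the maximum.
    return min(a, b)

print(crisp_rule(27, True, False))           # True: interview arranged
print(tall(165))                             # 0.5: partly "tall"
attractiveness = 0.6                         # whatever subjective degree the manager assigns
print(fuzzy_and(tall(165), attractiveness))  # 0.5: degree to which "tall AND attractive" holds
```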
Grey areas and fuzzy sets

In fuzzy sets research we say that such characteristics are fuzzy. By this we mean that whether something belongs to a set or not, whether a statement is true or false, can gradually increase from 0 to 1 over a given range of values.

One of the hardest things in any fuzzy-based software application is how best to convert observed inputs (someone’s height) into a fuzzy degree of membership, and then further establish the rules governing the use of connectives such as AND and OR for that fuzzy set.

To this day, and likely in years or decades into the future, the rules for this transition are human-defined. For example, to specify how tall someone is, I could design a function that says a 190cm person is tall (with a truth value of 1) and a 140cm person is not tall (or tall with a truth value of 0).

Then from 140cm, for every increase of 5cm in height the truth value increases by 0.1. So a key feature of any AI system is that we, normal old humans, still govern all the rules concerning how values or words are defined. More importantly, we define all the actions that the AI system can take – the “THEN” statements.

Human–robot symbiosis

An area called computing with words takes the idea further by aiming for seamless communication between a human user and an AI computer algorithm.

For the moment, we still need to come up with mathematical representations of subjective terms such as “tall”, “attractive”, “good” and “fast”. Then we need to design a function for combining such comments or commands, followed by another mathematical definition for turning the result we get back into an output like “yes he is tall”.

In conceiving the idea of computing with words, researchers envisage a time when we might have more access to base-level expressions of these terms, such as the brain activity and readings when we use the term “tall”.

This would be an amazing leap, although mainly in terms of the technology required to observe such phenomena (the number of neurons in the brain, let alone synapses between them, is somewhere near the number of galaxies in the universe).

Even so, designing machines and algorithms that can emulate human behaviour to the point of mimicking communication with us is still a long way off.

In the end, any system we design will behave as it is expected to, according to the rules we have designed and the program that governs it.

An irrational fear?

This brings us back to the big fear of AI machines turning on us in the future.

The real danger is not in the birth of genuine artificial intelligence – that we will somehow manage to create a program that can become self-aware such as HAL 9000 in the movie 2001: A Space Odyssey or Skynet in the Terminator series.

The real danger is that we make errors in encoding our algorithms or that we put machines in situations without properly considering how they will interact with their environment.

These risks, however, are the same as those that come with any human-made system or object.

So if we were to entrust, say, the decision to fire a weapon to AI algorithms (rather than just the guidance system), then we might have something to fear.

Not a fear that these intelligent weapons will one day turn on us, but rather that we programmed them – given a series of subjective options – to decide the wrong thing and turn on us.
Even if there is some uncertainty about the future of “thinking” machines and what role they will have in our society, a sure thing is that we will be making the final decisions about what they are capable of.   When programming artificial intelligence, the onus is on us (as it is when we design skyscrapers, build machinery, develop pharmaceutical drugs or draft civil laws), to make sure it will do what we really want it to.   *This article originally appeared at The Conversation.

Meet the deep learning tools that can beat you at classic arcade games – without reading the manual

2:43AM | Thursday, 26 February

Think you’re good at classic arcade games such as Space Invaders, Breakout and Pong? Think again.

In a groundbreaking paper published today in Nature, a team of researchers led by DeepMind co-founder Demis Hassabis reported developing a deep neural network that was able to learn to play such games at an expert level.

What makes this achievement all the more impressive is that the program was not given any background knowledge about the games. It just had access to the score and the pixels on the screen.

It didn’t know about bats, balls, lasers or any of the other things we humans need to know about in order to play the games.

But by playing lots and lots of games many times over, the computer learnt first how to play, and then how to play well.

A machine that learns from scratch

This is the latest in a series of breakthroughs in deep learning, one of the hottest topics today in artificial intelligence (AI).

Actually, DeepMind isn’t the first such success at playing games. Twenty years ago a computer program known as TD-Gammon learnt to play backgammon at a super-human level, also using a neural network.

But TD-Gammon never did so well at similar games such as chess, Go or checkers (draughts).

In a few years’ time, though, you’re likely to see such deep learning in your Google search results. Early last year, inspired by results like these, Google bought DeepMind for a reported £500 million.

Many other technology companies are spending big in this space.

Baidu, the “Chinese Google”, set up the Institute of Deep Learning and hired experts such as Stanford University professor Andrew Ng.

Facebook has set up its Artificial Intelligence Research Lab, which is led by another deep learning expert, Yann LeCun.

And more recently Twitter acquired Madbits, another deep learning startup.

What is the secret sauce behind deep learning?

Geoffrey Hinton is one of the pioneers in this area, and is another recent Google hire. In an inspiring keynote talk at last month’s annual meeting of the Association for the Advancement of Artificial Intelligence, he outlined three main reasons for these recent breakthroughs.

First, lots of Central Processing Units (CPUs). These are not the sort of neural networks you can train at home. It takes thousands of CPUs to train the many layers of these networks. This requires some serious computing power.

In fact, a lot of progress is being made using the raw horsepower of Graphics Processing Units (GPUs), the super fast chips that power graphics engines in the very same arcade games.

Second, lots of data. The deep neural network plays the arcade game millions of times.

Third, a couple of nifty tricks for speeding up the learning, such as training a collection of networks rather than a single one. Think the wisdom of crowds.
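Strip away the pixels and the layers, and the training signal underneath a system like this is a simple trial-and-error loop: try an action, observe the change in score, and nudge the estimated value of that action towards what actually happened. Below is a bare-bones, tabular sketch of that loop; the game functions named in the comments are hypothetical placeholders, and DeepMind’s system replaces the lookup table with a deep neural network reading raw pixels.

```python
import random

ACTIONS = ["left", "right", "fire"]   # possible joystick actions in a toy game
Q = {}                                # lookup table: (state, action) -> estimated value
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1

def q(state, action):
    return Q.get((state, action), 0.0)

def choose_action(state):
    # Epsilon-greedy: mostly pick the action that currently looks best,
    # but sometimes explore at random.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))

def update(state, action, reward, next_state):
    # Move the value of (state, action) towards the observed reward
    # plus the discounted value of the best next action.
    best_next = max(q(next_state, a) for a in ACTIONS)
    target = reward + GAMMA * best_next
    Q[(state, action)] = q(state, action) + ALPHA * (target - q(state, action))

# Training would then loop over millions of plays of the game, e.g.:
#   state = reset_game()                      # hypothetical game functions
#   while not game_over(state):
#       action = choose_action(state)
#       reward, next_state = play_step(state, action)
#       update(state, action, reward, next_state)
#       state = next_state
```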
What will deep learning be good for?

Despite all the excitement, though, about deep learning technologies, there are some limitations on what it can do.

Deep learning appears to be good for low-level tasks that we do without much thinking: recognising a cat in a picture, understanding some speech on the phone or playing an arcade game like an expert.

These are all tasks we have “compiled” down into our own marvellous neural networks.

Cutting through the hype, it’s much less clear if deep learning will be so good at high-level reasoning. This includes proving difficult mathematical theorems, optimising a complex supply chain or scheduling all the planes in an airline.

Where next for deep learning?

Deep learning is sure to turn up in a browser or smartphone near you before too long. We will see products such as a super smart Siri that simplifies your life by predicting your next desire.

But I suspect there will eventually be a deep learning backlash in a few years’ time, when we run into the limitations of this technology. Especially if more deep learning startups sell for hundreds of millions of dollars. It will be hard to meet the expectations that all these dollars entail.

Nevertheless, deep learning looks set to be another piece of the AI jigsaw. Putting these and other pieces together will see much of what we humans do replicated by computers.

If you want to hear more about the future of AI, I invite you to the Next Big Thing Summit in Melbourne on April 21, 2015. This is part of the two-day CONNECT conference taking place in the Victorian capital.

Along with AI experts such as Sebastian Thrun and Rodney Brooks, I will be trying to predict where all of this is taking us.

And if you’re feeling nostalgic and want to try your hand at one of these games, go to Google Images and search for “atari breakout” (or follow this link). You’ll get a browser version of the Atari classic to play.

And once you’re an expert at Breakout, you might want to head to Atari’s arcade website.

This article was originally published on The Conversation. Read the original article.

Is Apple making an electric car to battle Tesla, Google or Climate Change?

2:10AM | Wednesday, 18 February

If you thought it had been a while since you heard any new rumours about the long-awaited Apple TV, they are about to be replaced by the even more exciting possibility that Apple may be about to build an electric car. The Wall Street Journal kicked things off with a report that Apple had been hiring “hundreds” of staff with automotive design skills to work on a project called “Titan” that may be a self-driving electric vehicle configured in a (not-so-exciting) mini-van design.   There are several back-stories to this potential move by Apple. In one, we see continuing competition with rival Google, which has been working on a driverless car for some time and says it will launch a commercial version onto the market between 2017 and 2020. Google’s motivation behind the self-driving car has been the development of the artificial intelligence software capable of pulling off this feat. Even if the car is not successful, the AI software will have a range of applications and possibilities that would still make the project worthwhile. Increasingly, Apple has shown its willingness to develop its own capability in a range of competitive technologies that it can incorporate into products.   In another back-story, there is electric car company Tesla, whose CEO, Elon Musk, has claimed that it will be as big financially as Apple within a decade. This will in part be based on the release of the Model 3, an affordable (US$35,000) family car with a range of 200 miles. Part of Tesla’s strategy appears to include the poaching of numerous Apple staff, although it seems that Apple has been reciprocating by offering Tesla staff large signing bonuses to move to Apple.   And finally there is the view that electric cars, self-driving or otherwise, represent the future of transportation, especially a climate-friendly and sustainable one. At first sight, this may be a bit hard to believe when you consider that the top 3 selling vehicles in the US in 2014 were “pickup trucks”. At the same time, hybrid electric vehicles represented less than 3% of all cars sold. Still, there is continuing interest by the car manufacturers in producing electric cars, if only as a hedge. GM has announced its new 200-mile-range Chevy Bolt that will retail at around the same price as Tesla’s Model 3.   There is little doubt that Apple could move into car manufacturing. With US$180 billion in cash, it could buy Fiat Chrysler, Tesla, General Motors and Ford outright.   There is also no doubt about its ability to bring design and innovative computing to an industry employing technology that significantly lags that found in an iPhone. Apple and Google have both made moves to create in-vehicle media interfaces based on their systems. Apple’s CarPlay will start to appear in cars this year. Customers who can’t wait can buy after-market devices from Pioneer.   Apple’s motivation to build an electric car may be driven by competition with Google, Tesla and others. It may also be about finding a new business that doubles its value to $1.3 trillion, as predicted by Carl Icahn. Alternatively, however, it may be genuinely interested in building a technology that makes driving more sustainable and less dependent on oil. Apple is set to invest $3 billion in new solar farms in California and Arizona to provide energy for its operations there. Apple CEO Tim Cook told investors on Tuesday: “We know that climate change is real. Our view is that the time for talk has passed, and the time for action is now. We’ve shown that with what we’ve done.”   Whether Apple’s electric cars are aimed at combating climate change will depend on how they are manufactured and how the recharging infrastructure, which is still largely to be built in the US and globally, is run. Apple throwing its weight behind this infrastructure being built at all would certainly help make electric cars a more popular option.   This article was originally published at The Conversation.

Data mining the new black box of self-driving cars

10:36AM | Tuesday, 21 October

Autonomous vehicles, or self-driving cars, are likely to be seen more widely on roads in 2015.   Already, legislation authorising the use of autonomous vehicles has been introduced in the US states of Nevada, Florida, California and Michigan, with similar legislation being planned for the UK. To date, these laws have focused on legalising the use of autonomous vehicles and dealing, to an extent, with some of the complex issues relating to liability for accidents.   But as with other emerging disruptive technologies, such as drones and wearables, it is essential that issues relating to user privacy and data security are properly addressed prior to the technologies being generally deployed.   Understanding autonomous vehicles   There is no single, uniform design for autonomous vehicles. Rather, it is best to understand an autonomous vehicle as a particular configuration of a combination of applications, some of which – such as adaptive cruise control, lane departure warnings, collision avoidance and parking assistance – are already part of current car design.   The most well-known prototype, Google’s self-driving car, uses a variety of technologies, including: a laser range finder (LIDAR) that generates a detailed 3D map of the environment; radars; cameras for detecting traffic lights; and a GPS. Other projects, including prototypes being developed by Mercedes-Benz, Volkswagen, Toyota and Oxford University, use different combinations of technologies.   This means that the privacy and data security problems arising from autonomous vehicles depend upon the precise technologies applied in any particular design. Some generalisations are, however, possible.   The relationship between the virtual and the real   The rules (or “code”) governing the online world have been different to those that apply offline. For example, online activities invariably generate digital traces, including metadata, which can be used to build profiles of users.   With emerging technologies, such as drones, wearables and autonomous vehicles, we are increasingly seeing the transposition of virtual models onto the real. One consequence of the range of sensors and data collection devices being deployed (and interconnected) is that our offline activities can leave traces at least as extensive as those generated online.   One way to understand types of autonomous vehicles is by reference to the kind of data collected and the ways in which that data is processed. For instance, autonomous vehicles often incorporate event recorders, or “black boxes”, to provide essential information in the event of an accident. This raises questions about who has rights to this data and about who can have access to the data.   Anonymising data   There is an overlap here with questions of liability, as insurance companies have clear incentives to collect as much data about user behaviour as possible. The potential for intrusive surveillance of personal activities is particularly jarring, as the car has been an archetypal space of personal privacy and freedom.   A fundamental distinction must be drawn between self-contained autonomous vehicles, in which the data collected from sensor devices installed in the car are stored and processed in the vehicle itself, and interconnected vehicles, in which data is shared with a centralised server and, potentially, with other vehicles.   
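To make that distinction concrete, here is a purely hypothetical sketch of a single telemetry record and the kind of pseudonymisation a designer could apply before anything leaves the vehicle. The field names, the salted-hash step and the record layout are all assumptions made for illustration, not any manufacturer’s actual system; note too that a salted hash is pseudonymisation rather than true anonymisation, since whoever holds the salt can re-link records to the vehicle.

    import hashlib, json, time

    VEHICLE_ID = "VIN-EXAMPLE-0001"        # a made-up identifier for the sketch
    SECRET_SALT = b"kept-on-the-vehicle"   # never uploaded; rotated periodically

    def pseudonymise(identifier: str) -> str:
        # Replace the real identifier with a salted hash before data is shared.
        return hashlib.sha256(SECRET_SALT + identifier.encode()).hexdigest()[:16]

    # The full record stays inside the vehicle (the "self-contained" case) ...
    onboard_record = {
        "vehicle": VEHICLE_ID,
        "timestamp": int(time.time()),
        "speed_kmh": 57.2,
        "lat_long": (-33.8688, 151.2093),
    }

    # ... while only a reduced, pseudonymised copy would go to a central server
    # (the "interconnected" case).
    shared_record = {
        "vehicle": pseudonymise(VEHICLE_ID),
        "timestamp": onboard_record["timestamp"],
        "speed_kmh": onboard_record["speed_kmh"],
        # precise location deliberately dropped from the shared copy
    }

    print(json.dumps(shared_record))

Even a toy sketch like this makes the trade-off visible: the moment a copy is sent to a server, questions about who holds the salt, who can re-identify records and who can access the data all arise.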
Regardless of whether a vehicle is self-contained or interconnected, design decisions have to be made about whether the data collected is anonymised or linked to individual users. If the data is not anonymised, especially with interconnected vehicles, this poses serious surveillance threats. After all, once the data exists, and especially if it is connected to a server, it is vulnerable to access by third parties.   It is possible to envisage implementations of autonomous vehicles where data about a particular user is linked to other data sources, such as an online profile, for purposes such as tracking or marketing. This might take the form of personalised advertising displayed in the car, or even adjusting a vehicle’s route so that it passes retail outlets which match a user’s imputed preferences.   What else is at stake: human autonomy and hacking   We are now familiar with technologies, such as predictive search, which, in the online context, attempt to predict what we want to do and make more or less persuasive suggestions.   It is likely that some versions of autonomous vehicles will implement predictive technologies. In any case, the progressive delegation of human decisions to machines raises system-wide questions about the cumulative impact on human autonomy: the more people are habituated to decisions being made for them, the less likely they may be to make their own decisions.   We are also now depressingly familiar with the vulnerability of computer systems to malicious third parties. Just as effective data security is essential to online safety, autonomous vehicles must be designed with a high level of data security, especially given the potentially calamitous consequences of hacked vehicles. As interconnected data processing systems are progressively rolled out in applications such as wearables and autonomous vehicles, we seem likely to see an offline version of the same sort of perpetual guerrilla warfare played out online between information security and hackers.   Protecting privacy at the design stage   Autonomous vehicles promise significant social and economic benefits, especially in potential improvements to road safety. There are, nevertheless, considerable legal and regulatory challenges. As with other emerging disruptive technologies, it is vital that privacy and anonymity be properly protected at the design stage.   To date, in the face of significant challenges relating to the legality of autonomous vehicles and liability issues, the privacy rights of users have been relatively neglected. But unless the era of artificial intelligence is to be accompanied by us sleepwalking into ubiquitous surveillance, we must recognise that safety and security need to be balanced against the legitimate rights of people to control their own data and to retain their fundamental rights to privacy.   David Lindsay is a board member of the Australian Privacy Foundation. This article was originally published on The Conversation. Read the original article.

From art to science: Interest in robotics is on the move, but finance remains an issue

10:37AM | Thursday, 2 October

The cofounder of a pioneering Sydney-based robotics startup, with a Powerhouse Museum display and a successful crowdfunding campaign under its belt, says the sector is set to get much bigger but finance for projects remains an issue.   Robological cofounder Damith Herath told Private Media there are a number of exciting robotics startups founded by Australians, including Marathon Robotics and Navisens, and the sector is gaining momentum globally.   “It’s kinda like the ‘70s in computing and the ‘90s in the web. It’s the same feeling in the robotics community and the general consensus is it’s getting a lot bigger,” Herath says.   “A few good examples are some of the startups Google has recently purchased, or Baxter, or Cynthia Breazeal, who quit her job at MIT to do a startup called Jibo and raised $2 million on Indiegogo.   “But we have to be careful, because a lot of people over-promise and under-deliver. Robots will move into other spaces, though not in the anthropomorphic sense.   “One of the issues is that finding people to finance you is tricky, especially for hardware. People are more comfortable with apps and things that get a quick return on their investment.”   In January, Robological raised $3031 on Indiegogo for Ro-buddy, a pre-built board that integrates with an Android app, making it easy to build a robot without needing to learn a programming language such as C. Herath says the startup is finalising the board for fabrication in China.   “You can build a Raspberry Pi robot straight away, plug in a camera and motors, and within 10 to 20 minutes you have a spy cam working with the Android app,” he says.   “We think it’s useful because it’s in the pro-maker space, but it’s not as complex as Arduino. So if you’re building something in home automation, you can get something going with Android.”   Aside from Ro-buddy, Herath says Robological does consulting and research work, including working as a research partner with the Australian distributor for Rethink Robotics’ Baxter robot and on Curtin University’s Alternative Anatomies project.   It is also “chipping away” on a variation of the cloud-based internet of things robotics ideas put forward by UC Berkeley professor Ken Goldberg, although Herath is remaining tight-lipped about what the project involves.   The startup began with a robotics display called the Articulated Head, which was on exhibit for two years at Sydney’s Powerhouse Museum.   “The three founders – Zhengzhi Zhang, Christian Kroos and I – met at the University of Western Sydney six years ago on a project called Thinking Ahead, which was a project of the Australian Research Council into AI (artificial intelligence).   “We each had a slightly different background, myself with robotics engineering, Zhang with software engineering and Christian with linguistic and cognitive science.   “Stelarc is one of the top performing artists in the world; an Australian artist who’s done a lot of work with robotics on stage and theatre. And the project I worked on was conceived of by Stelarc.”   The project ended when funding ended, but it had allowed the team to develop valuable intellectual property on robots and human interaction. The founders decided to form Robological to continue their research.   One of its first projects was called Adopt a Robot, a research project looking into interactions between humans and robots.   “It got a lot of good publicity because it captured the public imagination. 
We gave away seven robots and over six months we changed their behaviour and added a face… Each person who got a robot had to care for it and fill out a questionnaire every four to six weeks,” Herath says.   Next month, Robological will jointly organise a workshop on robots and art with Curtin University as part of the Sixth International Conference on Social Robotics in Sydney.

Forget the clutch, self-driving cars need ‘adjustable ethics’ set by owners

9:34AM | Wednesday, 10 September

One of the issues with self-driving vehicles is legal liability for death or injury in the event of an accident. If the car maker programs the car so the driver has no choice, it is likely the company could be sued over the car’s actions.   One way around this is to shift liability to the car owner by allowing them to determine a set of values or options in the event of an accident.   People are likely to want to have the option to choose how their vehicle behaves, both in an emergency and in general, so it seems the issue of adjustable ethics will become real as robotically controlled vehicles become more common.   Self-drive is already here   With self-driving vehicles already legal to drive on public roads in a growing number of US states, the trend is spreading around the world. The United Kingdom will allow these vehicles from January 2015.   Before there is widespread adoption, though, people will need to be comfortable with the idea of a computer being in full control of their vehicle. Much progress towards this has been made already. A growing number of cars, including mid-priced Fords, have an impressive range of accident-avoidance and driver-assist technologies like adaptive cruise control, automatic braking, lane-keeping and parking assist.   People who like driving for its own sake will probably not embrace the technology. But there are plenty of people who already love the convenience, just as they might also opt for automatic transmission over manual.   Are they safe?   After almost 500,000km of on-road trials in the US, Google’s test cars have not been in a single accident while under computer control.   Computers have faster reaction times and do not get tired, drunk or impatient. Nor are they given to road rage. But as accident-avoidance and driver-assist technologies become more sophisticated, some ethical issues are raising their heads.   The question of how a self-driven vehicle should react when faced with an accident where all options lead to varying numbers of deaths was raised earlier this month.   This is an adaptation of the “trolley problem” that ethicists use to explore the dilemma of sacrificing an innocent person to save multiple innocent people; pragmatically choosing the lesser of two evils.   An astute reader will point out that, under normal conditions, the car’s collision-avoidance system should have applied the brakes before it became a life-and-death situation. That is true most of the time, but with cars controlled by artificial intelligence (AI), we are dealing with unforeseen events for which no design currently exists.   Who is to blame for the deaths?   If car makers install a “do least harm” instruction and the car kills someone, they create legal liability for themselves. The car’s AI has decided that a person shall be sacrificed for the greater good.   Had the car’s AI not intervened, it’s still possible people would have died, but it would have been you that killed them, not the car maker.   Car makers will obviously want to manage their risk by allowing the user to choose a policy for how ethically the car will behave in an emergency.   As Patrick Lin points out, the options are many. You could be:

- democratic, and specify that everyone has equal value
- pragmatic, so certain categories of person should take precedence, as with the kids on the crossing, for example
- self-centred, and specify that your life should be preserved above all
- materialistic, and choose the action that involves the least property damage or legal liability.
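As a purely hypothetical illustration, and not anything a manufacturer actually offers, such a preference could be exposed to the owner as a simple policy setting that the car’s planning software consults when ranking whatever options remain. The policy names below follow Lin’s list; the class, field names and scoring weights are invented for the sketch.

    from enum import Enum

    class EthicsPolicy(Enum):
        DEMOCRATIC = "democratic"        # everyone has equal value
        PRAGMATIC = "pragmatic"          # certain categories (e.g. children) first
        SELF_CENTRED = "self_centred"    # preserve the occupants above all
        MATERIALISTIC = "materialistic"  # minimise property damage / legal liability

    def score(option: dict, policy: EthicsPolicy) -> float:
        # Rank one candidate manoeuvre under the owner's chosen policy (lower is better).
        if policy is EthicsPolicy.DEMOCRATIC:
            return option["expected_casualties"]
        if policy is EthicsPolicy.PRAGMATIC:
            return option["expected_casualties"] + 10 * option["children_at_risk"]
        if policy is EthicsPolicy.SELF_CENTRED:
            return 100 * option["occupants_at_risk"] + option["expected_casualties"]
        return option["property_damage_cost"]

    options = [
        {"name": "swerve", "expected_casualties": 1, "children_at_risk": 0,
         "occupants_at_risk": 1, "property_damage_cost": 20000},
        {"name": "brake only", "expected_casualties": 2, "children_at_risk": 2,
         "occupants_at_risk": 0, "property_damage_cost": 1000},
    ]

    owner_policy = EthicsPolicy.DEMOCRATIC   # the owner's stored preference
    print(min(options, key=lambda o: score(o, owner_policy))["name"])

Even this toy version shows why liability is so vexed: whoever sets the policy, and whoever chose the weights inside the scoring function, has a hand in the outcome.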
While this is clearly a legal minefield, the car maker could argue that it should not be liable for damages that result from the user’s choices – though the maker could still be faulted for giving the user a choice in the first place.   Let’s say the car maker is successful in deflecting liability. In that case, the user becomes solely responsible whether or not they have a well-considered code of ethics that can deal with life-and-death situations.   People want choice   Code of ethics or not, a recent survey found that 44% of respondents believe they should have the option to choose how the car will behave in an emergency.   About 33% thought that government law-makers should decide. Only 12% thought the car maker should decide the ethical course of action.   In Lin’s view, it then falls to the car makers to create a code of ethical conduct for robotic cars. This may well be good enough, but if it is not, then government regulations can be introduced, including laws that limit a car maker’s liability in the same way that legal protection for vaccine makers was introduced because it is in the public interest that people be vaccinated.   In the end, are not the tools we use, including the computers that do things for us, just extensions of ourselves? If that is so, then we are ultimately responsible for the consequences of their use.   David Tuffley does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations. This article was originally published on The Conversation. Read the original article.

Best of the Web: Pivots, robots and the end of sleep

6:18PM | Sunday, 22 June

No sleep needed: New technologies are emerging that could radically reduce our need to sleep – if we can bear to use them, writes Jessa Gamble for Aeon magazine.   Imagine a disease that cuts your conscious life by one-third. You would clamour for a cure. We’re talking about sleep. There may be no cure yet for sleep, but the palliatives are getting better.   “Work, friendships, exercise, parenting, eating, reading — there just aren’t enough hours in the day,” Gamble writes.   “To live fully, many of us carve those extra hours out of our sleep time. Then we pay for it the next day. A thirst for life leads many to pine for a drastic reduction, if not elimination, of the human need for sleep. Little wonder: if there were a widespread disease that similarly deprived people of a third of their conscious lives, the search for a cure would be lavishly funded. It’s the Holy Grail of sleep researchers, and they might be closing in.”   Dilbert does startup: When Dilbert cartoonist Scott Adams turned his hand to entrepreneurship, he wasn’t prepared for some of the weirder ways of Silicon Valley. Describing himself as an “embedded journalist”, this week he takes on the pivot.   “The Internet is no longer a technology,” Adams writes.   “The Internet is a psychology experiment. Building a product for the Internet is the easy part.   “Getting people to understand the product and use it is the hard part. The only way to make the hard part work is by testing one hypothesis after another. Every entrepreneur is a behavioral psychologist with the tools to pull it off.”   And he’s distilled it all down in “the system”, which looks like this:

1. Form a team
2. Slap together an idea and put it on the Internet.
3. Collect data on user behavior
4. Adjust, pivot, and try again

What the gospel of innovation gets wrong: “In the last years of the nineteen-eighties, I worked not at startups but at what might be called finish-downs,” writes Jill Lepore in a piece titled ‘The Disruption Machine’ in The New Yorker.   Lepore’s thesis is that Clayton Christensen’s theory of disruption, accepted across American industry as “the gospel of innovation”, is wobbly at best because it rests on a group of handpicked case studies that prove little or nothing.   “Most of the entrant firms celebrated by Christensen as triumphant disrupters no longer exist, their success having been in some cases brief and in others illusory,” writes Lepore.   Anyone who has anything to do with the startup industry will relate to this point:   “Ever since “The Innovator’s Dilemma,” everyone is either disrupting or being disrupted,” she writes.   “There are disruption consultants, disruption conferences, and disruption seminars. This fall, the University of Southern California is opening a new program: “The degree is in disruption,” the university announced. “Disrupt or be disrupted,” the venture capitalist Josh Linkner warns in a new book, “The Road to Reinvention,” in which he argues that “fickle consumer trends, friction-free markets, and political unrest,” along with “dizzying speed, exponential complexity, and mind-numbing technology advances,” mean that the time has come to panic as you’ve never panicked before.”   Don’t worry about the robots: Venture capitalist Marc Andreessen does not believe that robots will eat jobs.   “Robots and AI are not nearly as powerful and sophisticated as people are starting to fear,” writes Andreessen.   “With my venture capital hat on I wish they were, but they’re not. 
There are enormous gaps between what we want them to do, and what they can do. There is still an enormous gap between what many people do in jobs today, and what robots and AI can replace. There will be for decades.”   Image credit: Flickr/jdhancock

US start-up raises $11.5 million for web-connected teddy bear

3:55AM | Monday, 11 March

The struggles of the Australian toy market have been put into sharp focus by a US-based start-up founded by a former Pixar executive, which has raised $11.5 million for an internet-connected, artificially intelligent teddy bear.

How start-ups cured my big business boredom

9:59AM | Wednesday, 26 September

David Urpani doesn’t like to stay still for long. He went from being an architect to a doctor of artificial intelligence to the founder of insurance comparison giant iSelect in 2000.

Consumers trade old mobile phones for cash using ecoATM

9:36AM | Tuesday, 25 September

A new recycling “ATM” will take an old mobile phone and pay an agreed price on the spot, taking the concept of bartering to a new level.

Five sectors set to thrive beyond the mining boom

7:44AM | Thursday, 26 July

Concerns were raised by economic soothsayers this week when a new report predicted the end of the mining boom – Australia’s runaway success story – within two years.