artificial intelligence

Latest

Games by numbers: machine learning is changing sport

5:30AM | Tuesday, 19 May

The drive to improve performance means elite sport is inundated with data from wearable technologies such as GPS, computer vision and match statistics. So professional clubs are constantly on the lookout for tools that can help turn these data into usable and meaningful information.

One such tool gaining popularity is machine learning. Put simply, machine learning is a form of artificial intelligence whereby computers are able to learn without being explicitly programmed by a human operator.

What makes machine learning algorithms so useful is their ability to be trained on large pre-existing data sets. These trained algorithms can be used to identify potentially complex yet meaningful patterns in the data, which then allows us to predict or classify future instances or events.
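To make that train-then-predict workflow concrete, here is a minimal sketch assuming scikit-learn; the "wearable" features, labels and thresholds are all invented stand-ins for real club data, purely to show the shape of the pipeline.

```python
# A minimal sketch of training a classifier on (invented) wearable-style data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row is one passage of play: mean speed (m/s), peak acceleration
# (m/s^2) and distance covered (m), as a wearable unit might report.
X = rng.normal(loc=[5.0, 2.5, 30.0], scale=[1.5, 1.0, 10.0], size=(500, 3))
y = (X[:, 0] + X[:, 1] > 8.0).astype(int)  # 1 = "high-intensity" passage

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # learn patterns from the pre-existing data set

# The trained model can now classify future, unseen passages of play.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```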
Machine learning approaches often outperform traditional statistical techniques, which are largely incapable of accounting for the dynamic and almost random patterns in so much of the data obtained from sport.

A game of footy

Consider a typical game of elite Australian Rules football. During any match played in the Australian Football League (AFL), sources of information relating to player movement and performance are available in near real time to coaches and support staff.

Despite access to this information, the ability of coaches to observe, process and evaluate the actions of 18 players on different areas of the field is limited. And that doesn’t even include the opposition. As humans, coaches simply do not possess the capacity to undertake such a task successfully.

Don Norman summarises this predicament in his book Things That Make Us Smart: Defending Human Attributes in the Age of the Machine. He says the power of the unaided human mind is overrated, due partly to our inability to overcome inherent limitations in areas such as memory and conscious reasoning:

Human memory is well tuned to remember the substance and meaning of events, not the details […] Humans can essentially attend to only one conscious task at a time. We cannot maintain attention on a task for extended periods.

In the sporting context, coaches are therefore limited with respect to their cognitive abilities. This can only be improved by developing external aids, such as machine learning, to enhance these skills.

Suppose we consider a short five-second period of play within an AFL match, featuring multiple players each undertaking different movement patterns and performing various skilled actions. This brief section of the game could be construed differently by multiple coaches working within a single team, depending on cognitive biases including previous experience, prejudices and individual personality traits.

Machine learning thus has a place in providing rapid, objective evidence obtained from data to help inform coach decision-making.

The science in sport

But what about the scientific evidence for this approach? The peer-reviewed sports science literature is actually full of successful applications of machine learning to sport.

Examples from biomechanics, in particular, show extensive use of machine learning. Notably, pattern recognition algorithms have been developed to identify individual athlete movement sequences in a variety of different sports.

In soccer, machine learning match analysis has been used to identify the conservative strategies of away teams competing in the English Premier League. It has also been used to discover the optimal methods by which teams obtain a shot on goal, or return the ball in tennis.

Machine learning has been used to predict the behaviour of individual athletes, such as cricket bowlers in the Indian Premier League, and team performance in the Asian Games based on factors varying from athlete age and experience levels to national social-cultural factors.

Researchers from the University of Coimbra, in Portugal, successfully implemented a suite of machine learning algorithms to identify talented basketball players based on their psychological characteristics and practice history. A team from Deakin University developed a set of rules to explain the physical characteristics most strongly linked with Australian Rules football draft success.

A body of work focusing on automated classification of human movements including kicking, running and jumping using wearable technologies has also emerged. Machine learning is also being used to help predict the return-to-play time for soccer players following an injury, and in selecting the appropriate balance of batsmen and bowlers in a cricket match.

Machine learning vs human coaching

Improvements in technology and machine learning continue to progress the field towards artificial intelligence and real-time use in sport. But is it possible that computers will ever replace the coach?

Well, in some ways, they already have. Many elite sporting clubs already set specific thresholds for athletes during training, based on perceived reductions in performance or increases in injury risk if a threshold is exceeded.

The judgement on what is appropriate treatment of the athlete is made solely by a computer-based analysis of data collected in the field. For the moment, at least, the decision on whether to act or not on this information still remains with the coach. But in future that may well change.

Sam Robertson is Senior Research Fellow (Victoria University/Western Bulldogs) at Victoria University.

This article was originally published on The Conversation. Read the original article.

Apps tackling mental health issues and stress dominate Outware Mobile’s health hackathon

5:59AM | Tuesday, 12 May

Apps tackling issues around mental health and stress dominated the winners of Outware Mobile’s recent health hackathon. Seventeen teams worked from the startup’s Richmond office to create working software within just eight hours.

The app development company partnered with organisations such as VicHealth to solve real-world problems facing the health sector – everything from monitoring rheumatoid disease to improving access to suitable sport and recreation facilities for people living with a disability.

The winners of the hackathon were:

- 3+Things, an iOS app encouraging people to reflect on what they’re grateful for in life, with the aim of increasing overall happiness and combating depression.
- The Flare Diary, an Android app aimed at helping users monitor their pain’s severity and location by audio or touch.
- Swipe for Sport, an Android app to help people find sport and recreational facilities.
- Schmooze, an iOS app that uses artificial intelligence to lighten the mood by simulating a conversation – allowing people to work through their problems and emotions.
- Choice-O-Matic, an iPad game using positive reinforcement to teach young people how to open up about issues and start difficult conversations.
- Uplift, an iOS game that teaches young people how to manage stress by practising their breathing.

Co-founder and director of Outware Mobile, Gideon Kowadlo, told StartupSmart the company decided to focus on health because it is an industry it is passionate about and particularly interested in.

“We have some existing clients in the health industry and we wanted to explore the area further,” he says. “It’s also an area that can directly provide benefit to people and we wanted to generate some ideas in this space. Despite the fact that there are barriers to entry in terms of regulation, there is still a lot you can achieve and there’s a lot of exciting potential out of mobile and new developments.”

Kowadlo says while a theme hasn’t been chosen for the next hackathon, it will likely focus on a specific sector as it’s a great way to “dive deeper” into how innovation can be applied to a particular industry.

“While there were 17 entries all together and only a small handful of winners, a huge number of others could equally become useful apps,” he says. “The important thing is the conversation it starts – we had partners there that were providing the challenges and they were all energised by the day as well. And that then starts them thinking about what the possibilities are and also starts a conversation in their organisations and the community.”

Do you know more on this story or have a tip of your own? Raising capital or launching a startup? Let us know. Follow StartupSmart on Facebook, Twitter, and LinkedIn.

Why a robot could do your job

4:53AM | Monday, 20 April

Here’s a game to play over dinner. One person names a profession that they believe can’t be taken over by a machine, and another person has to make a case why it’s not so future-proof. We played this game on an upcoming episode of SBS’s Insight on the topic of the future of robots and artificial intelligence.

The first profession suggested was musician. An argument often put forward against artificial intelligence (AI) is that computers can’t be creative. But there are plenty of examples to counter this argument. For instance, computers can take plain sheet music and turn it into an expressive jazz performance, as my colleague Ramon Lopez de Mantaras has shown. So, jazz musicians watch out. Your jobs might not be safe from robot incursion.

The next option was police officer. It’s often said that computers can’t or won’t behave ethically. Unfortunately, Hollywood has already painted a very dystopian picture here in movies like Robocop and Terminator. And, as the current UN campaign to ban autonomous weapons demonstrates, we could easily end up there if we aren’t careful.

The third profession put forward was human resources. Naturally, this came from an HR consultant worried for her future job prospects. However, the bureaucratic side of HR is already easily automated. Indeed, we spend much of our lives on the phone already talking to machines. Can I speak to a real person, please? On the other hand, the more human-facing side of HR is likely to be harder to automate. But as we argue in the next answer, it’s not clear that this will be impossible.

The fourth challenge was psychiatrist. Again, the human-facing nature of this would seem to offer significant resistance to automation. Nevertheless, there’s an interesting historical precedent. A well known computer program called Eliza was the very first chatterbot. It unintentionally passed itself off as a real Rogerian psychotherapist.

Eliza was not very smart. Indeed, the program’s author, Joseph Weizenbaum, meant it more as parody than as therapist. However, his secretary famously asked to be left alone so she could talk in private to the chatterbot. So, shrinks watch out. Your jobs might not be safe.
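How little machinery Eliza needed is easy to show. Here is a toy sketch of the Eliza-style trick of pattern matching plus pronoun "reflection"; the rules below are invented for illustration, not Weizenbaum's original script.

```python
# A toy, Eliza-style responder: a handful of regex rules and pronoun swaps.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),  # catch-all keeps the conversation moving
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the reply points back at the speaker."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel anxious about my job"))
# -> Why do you feel anxious about your job?
```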
The final challenge was Prime Minister. On the one hand, this is a good answer, as one assumes there’s little routine to being Prime Minister but a lot of tough, high-level decision making that would be hard for a machine to handle. On the other hand, it’s a poor winner of our little game. It may be the only job in the whole country that’s safe from robots.

In one final, beautiful irony, this forthcoming episode of Insight has the robots up on the stage. We, the supposed expert commentators, were in the audience. So even TV pundits should watch out. Your jobs might not be safe either.

Net effects

What this discussion highlights is that the middle classes are likely to be increasingly squeezed by machine labour. Professions that we used to think were quite safe – like doctor, lawyer or accountant – will be increasingly automated.

Whenever technology takes away jobs, it tends to make new jobs and industries elsewhere. For example, printing removed the need for scribes but created the vast publishing industry in its stead. And publishing went on to create many other jobs in the industries that grew out of all the knowledge passed on in printed material.

More recently, computers have taken away many traditional jobs in the printing industry, like typesetters. But we now see many new jobs in areas like self-publishing and web design.

Economists continue to argue over the net effects of technology. Does technology create more economic activity so we are all better off? Or does it put more people out of work, concentrating wealth in the hands of the few?

One thing seems sure: it requires us to adapt. And for this, we need an educated, high-tech workforce. This brings the conversation back to higher education and the stalled reforms that now trouble this sector in Australia. If there is one policy we need to get right to future-proof Australia against machines and other disruptions, I would argue, this is it.

This article was originally published on The Conversation. Read the original article.

THE NEWS WRAP: Ashton Kutcher launches new venture capital fund

3:30PM | Sunday, 15 March

Ashton Kutcher and his business partner Guy Oseary are launching a new venture capital fund called Sound Ventures. TechCrunch reports the fund will be stage-agnostic, allowing the pair to invest in later-stage startups.

Kutcher has previously invested in companies such as Uber, Spotify and Airbnb through his first fund, A-Grade Investments. The actor and tech investor was in Australia last month for the Tech My Way conference, where he speculated that virtual and augmented reality, biotechnology and artificial intelligence were the next big things in tech.

Controversial app developer slams critics

An Aussie app developer who promised to give thousands of dollars to charity and was exposed for not handing over the money has hit back in a rambling Facebook post. Belle Gibson, the founder of The Whole Pantry, solicited donations from around 200,000 people for various causes and said she would give away a quarter of her company’s profits – however, an investigation by The Age found no such contributions were ever made.

Now the entrepreneur has hit back, according to Fairfax, writing in a Facebook post that those who were speaking to the media about her were bullying “myself and my family”. “I know the work my company and it’s [sic] contents did changed [sic] hundreds of thousands for the better,” she said.

YouTube could be considering a subscription model for premium content

YouTube could soon have its own paid video-on-demand service, according to The Verge. The company is exploring the concept as a means to improve its bottom line and allow popular content producers to access a higher percentage of ad revenue. The rumours come from an unnamed executive at a company that partners with YouTube to produce video content.

Competition between streaming providers has heated up in the past 12 months, with Netflix confirming it is launching in Australia on March 24 and taking on local companies Quickflix and EzyFlix.

Overnight

The Dow Jones Industrial Average is down 145.91 points, falling 0.82% overnight to 17,749.31. The Aussie dollar is currently trading at around 76.23 US cents.

Follow StartupSmart on Facebook, Twitter, and LinkedIn.

To stop the machines taking over we need to think about fuzzy logic

3:41AM | Wednesday, 11 March

Amid all the dire warnings that machines run by artificial intelligence (AI) will one day take over from humans, we need to think more about how we program them in the first place.

The technology may be too far off to seriously entertain these worries – for now – but much of the distrust surrounding AI arises from misunderstandings of what it means to say a machine is “thinking”.

One of the current aims of AI research is to design machines, algorithms, input/output processes or mathematical functions that can mimic human thinking as much as possible. We want to better understand what goes on in human thinking, especially when it comes to decisions that cannot be justified other than by drawing on our “intuition” and “gut feelings” – the decisions we can only make after learning from experience.

Consider the human who hires you after first comparing you to other job applicants in terms of your work history, skills and presentation. This human-manager is able to make a decision identifying the successful candidate.

If we can design a computer program that takes exactly the same inputs as the human-manager and can reproduce its outputs, then we can make inferences about what the human-manager really values, even if he or she cannot articulate their decision on who to appoint other than to say “it comes down to experience”.

This kind of research is being carried out today and applied to understand the risk-aversion and risk-seeking behaviour of financial consultants. It’s also being looked at in the field of medical diagnosis. These human-emulating systems are not yet being asked to make decisions, but they are certainly being used to help guide human decisions and reduce the level of human error and inconsistency.

Fuzzy sets and AI

One promising area of research is to utilise the framework of fuzzy sets. Fuzzy sets and fuzzy logic were formalised by Lotfi Zadeh in 1965 and can be used to mathematically represent our knowledge pertaining to a given subject.

In everyday language, what we mean when accusing someone of “fuzzy logic” or “fuzzy thinking” is that their ideas are contradictory, biased or perhaps just not very well thought out. But in mathematics and logic, “fuzzy” is a name for a research area that has quite a sound and straightforward basis.

The starting point for fuzzy sets is this: many decision processes that can be managed by computers traditionally involve truth values that are binary. Something is true or false, and any action is based on the answer (in computing this is typically encoded by 0 or 1). For example, our human-manager from the earlier example may say to human resources:

IF the job applicant is aged 25 to 30 AND has a qualification in philosophy OR literature THEN arrange an interview.

This information can all be written into a hiring algorithm, based on true or false answers, because an applicant either is between 25 and 30 or is not, and either has the qualification or does not.

But what if the human-manager is somewhat more vague in expressing their requirements? Instead, the human-manager says:

IF the applicant is tall AND attractive THEN the salary offered should be higher.

The problem HR faces in encoding these requests into the hiring algorithm is that they involve a number of subjective concepts. Even though height is something we can objectively measure, how tall should someone be before we call them tall? Attractiveness is also subjective, even if we only account for the taste of the single human-manager.
Grey areas and fuzzy sets

In fuzzy sets research we say that such characteristics are fuzzy. By this we mean that whether something belongs to a set or not, or whether a statement is true or false, can gradually increase from 0 to 1 over a given range of values.

One of the hardest things in any fuzzy-based software application is how best to convert observed inputs (someone’s height) into a fuzzy degree of membership, and then further establish the rules governing the use of connectives such as AND and OR for that fuzzy set. To this day, and likely for years or decades into the future, the rules for this transition are human-defined. For example, to specify how tall someone is, I could design a function that says a 190cm person is tall (with a truth value of 1) and a 140cm person is not tall (or tall with a truth value of 0). Then, from 140cm, for every increase of 5cm in height the truth value increases by 0.1.

So a key feature of any AI system is that we, normal old humans, still govern all the rules concerning how values or words are defined. More importantly, we define all the actions that the AI system can take – the “THEN” statements.
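A minimal sketch of the two encodings follows, using the linear “tall” function just described and the conventional min() operator for fuzzy AND; the attractiveness score is a made-up stand-in for whatever subjective rating the human-manager supplies.

```python
# Crisp (binary) versus fuzzy (graded) encodings of the two hiring rules.

def crisp_rule(age: int, qualification: str) -> bool:
    """IF aged 25 to 30 AND qualified in philosophy OR literature THEN interview."""
    return 25 <= age <= 30 and qualification in ("philosophy", "literature")

def tall(height_cm: float) -> float:
    """Truth value of 'tall': 0 at 140cm, rising 0.1 per 5cm to 1 at 190cm."""
    return min(max((height_cm - 140) / 50, 0.0), 1.0)

def fuzzy_rule(height_cm: float, attractiveness: float) -> float:
    """Degree to which 'tall AND attractive' holds; min() models AND."""
    return min(tall(height_cm), attractiveness)

print(crisp_rule(27, "philosophy"))  # True - binary, no middle ground
print(tall(165))                     # 0.5 - halfway between 140cm and 190cm
print(fuzzy_rule(165, 0.8))          # 0.5 - limited by the weaker condition
```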
Human–robot symbiosis

An area called computing with words takes the idea further by aiming for seamless communication between a human user and an AI computer algorithm. For the moment, we still need to come up with mathematical representations of subjective terms such as “tall”, “attractive”, “good” and “fast”. Then we need to design a function for combining such comments or commands, followed by another mathematical definition for turning the result we get back into an output like “yes, he is tall”.

In conceiving the idea of computing with words, researchers envisage a time when we might have more access to base-level expressions of these terms, such as the brain activity and readings when we use the term “tall”. This would be an amazing leap, although mainly in terms of the technology required to observe such phenomena (the number of neurons in the brain, let alone synapses between them, is somewhere near the number of galaxies in the universe).

Even so, designing machines and algorithms that can emulate human behaviour to the point of mimicking communication with us is still a long way off. In the end, any system we design will behave as it is expected to, according to the rules we have designed and the program that governs it.

An irrational fear?

This brings us back to the big fear of AI machines turning on us in the future. The real danger is not the birth of genuine artificial intelligence – that we will somehow manage to create a program that can become self-aware, such as HAL 9000 in the movie 2001: A Space Odyssey or Skynet in the Terminator series.

The real danger is that we make errors in encoding our algorithms, or that we put machines in situations without properly considering how they will interact with their environment. These risks, however, are the same that come with any human-made system or object.

So if we were to entrust, say, the decision to fire a weapon to AI algorithms (rather than just the guidance system), then we might have something to fear. Not a fear that these intelligent weapons will one day turn on us, but rather that we programmed them – given a series of subjective options – to decide the wrong thing and turn on us.

Even if there is some uncertainty about the future of “thinking” machines and what role they will have in our society, one sure thing is that we will be making the final decisions about what they are capable of. When programming artificial intelligence, the onus is on us (as it is when we design skyscrapers, build machinery, develop pharmaceutical drugs or draft civil laws) to make sure it will do what we really want it to.

This article was originally published on The Conversation.

Meet the deep learning tools that can beat you at classic arcade games – without reading the manual

2:43AM | Thursday, 26 February

Think you’re good at classic arcade games such as Space Invaders, Breakout and Pong? Think again.

In a groundbreaking paper published today in Nature, a team of researchers led by DeepMind co-founder Demis Hassabis reported developing a deep neural network that was able to learn to play such games at an expert level.

What makes this achievement all the more impressive is that the program was not given any background knowledge about the games. It just had access to the score and the pixels on the screen. It didn’t know about bats, balls, lasers or any of the other things we humans need to know about in order to play the games. But by playing lots and lots of games many times over, the computer learnt first how to play, and then how to play well.
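DeepMind's actual system pairs reinforcement learning with a deep network over raw pixels, but the trial-and-error loop it relies on can be sketched with the simplest tabular version of the idea; the two-state "game" below is invented purely for illustration.

```python
# A heavily simplified sketch of learning from nothing but "score":
# tabular Q-learning on an invented two-state, two-action game.
import random

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state: int, action: int) -> tuple[int, float]:
    """Toy game: only action 1 in state 1 scores a point."""
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    return random.randrange(N_STATES), reward

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # the agent's "knowledge"

state = 0
for _ in range(10_000):  # playing lots and lots of games many times over
    if random.random() < EPSILON:  # sometimes explore at random...
        action = random.randrange(N_ACTIONS)
    else:                          # ...otherwise exploit what has been learnt
        action = max(range(N_ACTIONS), key=lambda a: q[state][a])
    next_state, reward = step(state, action)
    # Nudge the estimate towards the reward plus discounted future value.
    best_next = max(q[next_state])
    q[state][action] += ALPHA * (reward + GAMMA * best_next - q[state][action])
    state = next_state

print(q)  # q[1][1] ends up largest: the agent has "learnt how to play"
```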
A machine that learns from scratch

This is the latest in a series of breakthroughs in deep learning, one of the hottest topics today in artificial intelligence (AI). Actually, DeepMind isn’t the first such success at playing games. Twenty years ago a computer program known as TD-Gammon learnt to play backgammon at a super-human level, also using a neural network. But TD-Gammon never did so well at similar games such as chess, Go or checkers (draughts).

In a few years’ time, though, you’re likely to see such deep learning in your Google search results. Early last year, inspired by results like these, Google bought DeepMind for a reported £500 million. Many other technology companies are spending big in this space. Baidu, the “Chinese Google”, set up the Institute of Deep Learning and hired experts such as Stanford University professor Andrew Ng. Facebook has set up its Artificial Intelligence Research Lab, which is led by another deep learning expert, Yann LeCun. And more recently Twitter acquired Madbits, another deep learning startup.

What is the secret sauce behind deep learning?

Geoffrey Hinton is one of the pioneers in this area, and is another recent Google hire. In an inspiring keynote talk at last month’s annual meeting of the Association for the Advancement of Artificial Intelligence, he outlined three main reasons for these recent breakthroughs.

First, lots of central processing units (CPUs). These are not the sort of neural networks you can train at home; it takes thousands of CPUs to train the many layers of these networks. This requires some serious computing power. In fact, a lot of progress is being made using the raw horsepower of graphics processing units (GPUs), the super-fast chips that power graphics engines in the very same arcade games.

Second, lots of data. The deep neural network plays the arcade game millions of times.

Third, a couple of nifty tricks for speeding up the learning, such as training a collection of networks rather than a single one. Think the wisdom of crowds.

What will deep learning be good for?

Despite all the excitement about deep learning technologies, there are some limitations over what it can do. Deep learning appears to be good for low-level tasks that we do without much thinking: recognising a cat in a picture, understanding some speech on the phone or playing an arcade game like an expert. These are all tasks we have “compiled” down into our own marvellous neural networks.

Cutting through the hype, it’s much less clear if deep learning will be so good at high-level reasoning. This includes proving difficult mathematical theorems, optimising a complex supply chain or scheduling all the planes in an airline.

Where next for deep learning?

Deep learning is sure to turn up in a browser or smartphone near you before too long. We will see products such as a super-smart Siri that simplifies your life by predicting your next desire.

But I suspect there will eventually be a deep learning backlash in a few years’ time, when we run into the limitations of this technology – especially if more deep learning startups sell for hundreds of millions of dollars. It will be hard to meet the expectations that all these dollars entail. Nevertheless, deep learning looks set to be another piece of the AI jigsaw. Putting these and other pieces together will see much of what we humans do replicated by computers.

If you want to hear more about the future of AI, I invite you to the Next Big Thing Summit in Melbourne on April 21, 2015. This is part of the two-day CONNECT conference taking place in the Victorian capital. Along with AI experts such as Sebastian Thrun and Rodney Brooks, I will be trying to predict where all of this is taking us.

And if you’re feeling nostalgic and want to try your hand at one of these games, go to Google Images and search for “atari breakout” (or follow this link). You’ll get a browser version of the Atari classic to play. And once you’re an expert at Breakout, you might want to head to Atari’s arcade website.

This article was originally published on The Conversation. Read the original article.

Is Apple making an electric car to battle Tesla, Google or Climate Change?

2:10AM | Wednesday, 18 February

If you thought it had been a while since you heard any new rumours about the long-awaited Apple TV, they are about to be replaced by an even more exciting possibility: that Apple may be about to build an electric car. The Wall Street Journal kicked things off with a report that Apple had been hiring “hundreds” of staff with automotive design skills to work on a project called “Titan” that may be a self-driving electric vehicle configured in a (not-so-exciting) mini-van design.

There are several back-stories to this potential move by Apple. In one, we see continuing competition with rival Google, which has been working on a driverless car for some time and says it will launch a commercial version onto the market between 2017 and 2020. Google’s motivation behind the self-driving car has been the development of artificial intelligence software capable of pulling off this feat. Even if the car is not successful, the AI software will have a range of applications and possibilities that would make the project still worthwhile. Increasingly, Apple has shown its willingness to develop its own capability in a range of competitive technologies that it can incorporate into products.

In another back-story, there is electric car company Tesla, whose CEO, Elon Musk, has claimed that it will be as big financially as Apple within a decade. This will in part be based on the release of the Model 3, an affordable (US$35,000) family car with a range of 200 miles. Part of Tesla’s strategy appears to include the poaching of numerous Apple staff, although it seems that Apple has been reciprocating by offering Tesla staff large signing bonuses to move to Apple.

And finally there is the view that electric cars, self-driving or otherwise, represent the future of transportation, especially a climate-friendly and sustainable one. At first sight, this may be a bit hard to believe when you consider that the top three selling vehicles in the US in 2014 were pickup trucks. At the same time, hybrid electric vehicles represented less than 3% of all cars sold. Still, there is continuing interest by the car manufacturers in producing electric cars, if only as a hedge. GM has announced its new 200-mile-range Chevy Bolt, which will retail at around the same price as Tesla’s Model 3.

There is little doubt that Apple could move into car manufacturing. With US$180 billion in cash, it could buy Fiat Chrysler, Tesla, General Motors and Ford outright. There is also no doubt that Apple could bring design and innovative computing to an industry employing technology that significantly lags that found in an iPhone. Apple and Google have both made moves to create in-vehicle media interfaces based on their systems: Apple’s CarPlay will start to appear in cars this year, and customers who can’t wait can buy after-market devices from Pioneer.

Apple’s motivation to build an electric car may be driven by competition with Google, Tesla and others. It may also be about finding a new business that doubles its value to $1.3 trillion, as predicted by Carl Icahn. Alternatively, however, it may be genuinely interested in building a technology that makes driving more sustainable and less dependent on oil. Apple is set to invest $3 billion in new solar farms in California and Arizona to provide energy for its operations there. Apple CEO Tim Cook told investors on Tuesday:

“We know that climate change is real. Our view is that the time for talk has passed, and the time for action is now.
We’ve shown that with what we’ve done.”

Whether Apple’s electric cars are aimed at combating climate change will depend on how they are manufactured and how the recharging infrastructure, which is still largely to be built in the US and globally, is run. Apple throwing its weight behind this infrastructure being built at all would certainly help make electric cars a more popular possibility.

This article was originally published at The Conversation.

Data mining the new black box of self-driving cars

10:36AM | Tuesday, 21 October

Autonomous vehicles, or self-driving cars, are likely to be seen more widely on roads in 2015. Already, legislation authorising the use of autonomous vehicles has been introduced in the US states of Nevada, Florida, California and Michigan, with similar legislation being planned for the UK. To date, these laws have focused on legalising the use of autonomous vehicles and dealing, to an extent, with some of the complex issues relating to liability for accidents.

But as with other emerging disruptive technologies, such as drones and wearables, it is essential that issues relating to user privacy and data security are properly addressed before the technologies are generally deployed.

Understanding autonomous vehicles

There is no single, uniform design for autonomous vehicles. Rather, it is best to understand an autonomous vehicle as a particular configuration of a combination of applications, some of which – such as adaptive cruise control, lane departure warnings, collision avoidance and parking assistance – are already part of current car design.

The most well-known prototype, Google’s self-driving car, uses a variety of technologies, including: a laser range finder (LIDAR) that generates a detailed 3D map of the environment; radars; cameras for detecting traffic lights; and a GPS. Other projects, including prototypes being developed by Mercedes-Benz, Volkswagen, Toyota and Oxford University, use different combinations of technologies. This means that the privacy and data security problems arising from autonomous vehicles depend upon the precise technologies applied in any particular design. Some generalisations are, however, possible.

The relationship between the virtual and the real

The rules (or “code”) governing the online world have been different to those that apply offline. For example, online activities invariably generate digital traces, including metadata, which can be used to build profiles of users. With emerging technologies, such as drones, wearables and autonomous vehicles, we are increasingly seeing the transposition of virtual models onto the real. One consequence of the range of sensors and data collection devices being deployed (and interconnected) is that our offline activities can leave traces at least as extensive as those generated online.

One way to understand types of autonomous vehicles is by reference to the kind of data collected and the ways in which that data is processed. For instance, autonomous vehicles often incorporate event recorders, or “black boxes”, to provide essential information in the event of an accident. This raises questions about who has rights to this data and about who can have access to it.

Anonymising data

There is an overlap here with questions of liability, as insurance companies have clear incentives to collect as much data about user behaviour as possible. The potential for intrusive surveillance of personal activities is particularly jarring, as the car has been an archetypal space of personal privacy and freedom.

A fundamental distinction must be drawn between self-contained autonomous vehicles, in which the data collected from sensor devices installed in the car is stored and processed in the vehicle itself, and interconnected vehicles, in which data is shared with a centralised server and, potentially, with other vehicles. Regardless of whether a vehicle is self-contained or interconnected, design decisions have to be made about whether or not the data collected is anonymised or linked to individual users. If the data is not anonymised, especially with interconnected vehicles, this poses serious surveillance threats. After all, once the data exists, and especially if it is connected to a server, it is vulnerable to access by third parties.
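As a sketch of what one such design decision might look like in code, here is a hypothetical way an interconnected vehicle could pseudonymise telemetry before upload. The record fields and key handling are invented for illustration, and a pseudonym is only one layer of protection, since location traces alone can often re-identify a driver.

```python
# Hypothetical sketch: replace the stable vehicle identifier with a keyed,
# non-reversible pseudonym before telemetry leaves the vehicle, so the
# central server never sees the raw ID.
import hashlib
import hmac

SECRET_KEY = b"held-in-vehicle-never-uploaded"  # assumed key management

def pseudonymise(record: dict) -> dict:
    """Swap the raw vehicle ID for a keyed HMAC-SHA256 pseudonym."""
    out = dict(record)
    digest = hmac.new(SECRET_KEY, record["vehicle_id"].encode(), hashlib.sha256)
    out["vehicle_id"] = digest.hexdigest()[:16]
    return out

reading = {"vehicle_id": "WVWZZZ1JZXW000001", "speed_kmh": 62.5,
           "lat": -37.8136, "lon": 144.9631}
print(pseudonymise(reading))  # same record, unlinkable identifier
```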
It is possible to envisage implementations of autonomous vehicles where data about a particular user is linked to other data sources, such as an online profile, for purposes such as tracking or marketing. This might take the form of personalised advertising displayed in the car, or even adjusting a vehicle’s route so that it passes retail outlets that match a user’s imputed preferences.

What else is at stake: human autonomy and hacking

We are now familiar with technologies, such as predictive search, which in the online context attempt to predict what we want to do and make more or less persuasive suggestions. It is likely that some versions of autonomous vehicles will implement predictive technologies. In any case, the progressive delegation of human decisions to machines raises system-wide questions about the cumulative impact on human autonomy: the more people are habituated to decisions being made for them, the less likely they may be to make their own decisions.

We are also now depressingly familiar with the vulnerability of computer systems to malicious third parties. Just as effective data security is essential to online safety, autonomous vehicles must be designed with a high level of data security, especially given the potentially calamitous consequences of hacked vehicles. As interconnected data processing systems are progressively rolled out in applications such as wearables and autonomous vehicles, we seem likely to see an offline version of the same sort of perpetual guerrilla warfare played out online between information security and hackers.

Protecting privacy at the design stage

Autonomous vehicles promise significant social and economic benefits, especially in potential improvements to road safety. There are, nevertheless, considerable legal and regulatory challenges. As with other emerging disruptive technologies, it is vital that privacy and anonymity be properly protected at the design stage.

To date, in the face of significant challenges relating to the legality of autonomous vehicles and liability issues, the privacy rights of users have been relatively neglected. But unless the era of artificial intelligence is to be accompanied by us sleepwalking into ubiquitous surveillance, we must recognise that safety and security need to be balanced against the legitimate rights of people to control their own data and to retain their fundamental rights to privacy.

David Lindsay is a board member of the Australian Privacy Foundation. This article was originally published on The Conversation. Read the original article.

From art to science: Interest in robotics is on the move, but finance remains an issue

10:37AM | Thursday, 2 October

The cofounder of a pioneering Sydney-based robotics startup, with a Powerhouse Museum display and a successful crowdfunding campaign under its belt, says the sector is set to get much bigger but finance for projects remains an issue.

Robological cofounder Damith Herath told Private Media there are a number of exciting robotics startups founded by Australians, including Marathon Robotics and Navisens, and the sector is gaining momentum globally.

“It’s kinda like the ‘70s in computing and the ‘90s in the web. It’s the same feeling in the robotics community and the general consensus is it’s getting a lot bigger,” Herath says. “A few good examples are some of the startups Google has recently purchased, or Baxter, or Cynthia Breazeal, who quit her job at MIT to do a startup called Jibo and raised $2 million on Indiegogo. But we have to be careful, because a lot of people over-promise and under-deliver. Robots will move into other spaces, though not in the anthropomorphic sense. One of the issues is that finding people to finance you is tricky, especially for hardware. People are more comfortable with apps and things that get a quick return on their investment.”

In January, Robological raised $3031 on Indiegogo for Ro-buddy, a pre-built board that integrates with an Android app, making it easy to build a robot without needing to learn a programming language such as C. Herath says the startup is finalising the board for fabrication in China.

“You can build a Raspberry Pi robot straight away, plug in a camera and motors, and within 10 to 20 minutes you have a spy cam working with the Android app,” he says. “We think it’s useful because it’s in the pro-maker space, but it’s not as complex as Arduino. So if you’re building something in home automation, you can get something going with Android.”

Aside from Ro-buddy, Herath says Robological does consulting and research work, including working as a research partner with the Australian distributor for Rethink Robotics’ Baxter robot and on Curtin University’s Alternative Anatomies project. It is also “chipping away” at a variation of the cloud-based internet-of-things robotics ideas put forward by UC Berkeley professor Ken Goldberg, although Herath is remaining tight-lipped about what the project involves.

The startup began with a robotics display called the Articulated Head, which was on exhibit for two years at Sydney’s Powerhouse Museum.

“The three founders – Zhengzhi Zhang, Christian Kroos and I – met at the University of Western Sydney six years ago on a project called Thinking Ahead, which was a project of the Australian Research Council into AI (artificial intelligence). We each had a slightly different background: myself with robotics engineering, Zhang with software engineering and Christian with linguistics and cognitive science.

“Stelarc is one of the top performing artists in the world; an Australian artist who’s done a lot of work with robotics on stage and in theatre. And the project I worked on was conceived of by Stelarc.”

The project ended when funding ended, but this allowed the team to develop valuable intellectual property on robots and human interaction. The founders decided to form Robological to continue their research. One of its first projects was called Adopt a Robot, a research project looking into interactions between humans and robots.

“It got a lot of good publicity because it captured the public imagination.
We gave away seven robots and over six months we changed their behaviour and added a face… Each person who got a robot had to care for it and fill out a questionnaire every four to six weeks,” Herath says.

Next month, Robological will jointly organise a workshop on robots and art with Curtin University as part of the Sixth International Conference on Social Robotics in Sydney.

Follow StartupSmart on Facebook, Twitter, and LinkedIn.

Forget the clutch, self-driving cars need ‘adjustable ethics’ set by owners

9:34AM | Wednesday, 10 September

One of the issues with self-driving vehicles is legal liability for death or injury in the event of an accident. If the car maker programs the car so the driver has no choice, it is likely the company could be sued over the car’s actions.

One way around this is to shift liability to the car owner by allowing them to determine a set of values or options in the event of an accident. People are likely to want the option to choose how their vehicle behaves, both in an emergency and in general, so it seems the issue of adjustable ethics will become real as robotically controlled vehicles become more common.

Self-drive is already here

With self-driving vehicles already legal to drive on public roads in a growing number of US states, the trend is spreading around the world. The United Kingdom will allow these vehicles from January 2015.

Before there is widespread adoption, though, people will need to be comfortable with the idea of a computer being in full control of their vehicle. Much progress towards this has been made already. A growing number of cars, including mid-priced Fords, have an impressive range of accident-avoidance and driver-assist technologies like adaptive cruise control, automatic braking, lane-keeping and parking assist.

People who like driving for its own sake will probably not embrace the technology. But there are plenty of people who already love the convenience, just as they might also opt for automatic transmission over manual.

Are they safe?

After almost 500,000km of on-road trials in the US, Google’s test cars have not been in a single accident while under computer control. Computers have faster reaction times and do not get tired, drunk or impatient. Nor are they given to road rage. But as accident-avoidance and driver-assist technologies become more sophisticated, some ethical issues are raising their heads.

The question of how a self-driven vehicle should react when faced with an accident where all options lead to varying numbers of deaths was raised earlier this month. This is an adaptation of the “trolley problem” that ethicists use to explore the dilemma of sacrificing an innocent person to save multiple innocent people; pragmatically choosing the lesser of two evils.

An astute reader will point out that, under normal conditions, the car’s collision-avoidance system should have applied the brakes before it became a life-and-death situation. That is true most of the time, but with cars controlled by artificial intelligence (AI), we are dealing with unforeseen events for which no design currently exists.

Who is to blame for the deaths?

If car makers install a “do least harm” instruction and the car kills someone, they create legal liability for themselves: the car’s AI has decided that a person shall be sacrificed for the greater good. Had the car’s AI not intervened, it’s still possible people would have died, but it would have been you that killed them, not the car maker.

Car makers will obviously want to manage their risk by allowing the user to choose a policy for how the car will behave in an emergency. The user gets to choose how ethically their vehicle will behave in an emergency. As Patrick Lin points out, the options are many. You could be:

- democratic, and specify that everyone has equal value
- pragmatic, so certain categories of person should take precedence, as with the kids on the crossing, for example
- self-centred, and specify that your life should be preserved above all
- materialistic, and choose the action that involves the least property damage or legal liability.
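One hypothetical sketch of how such owner-selectable policies might be expressed as configuration follows; the outcome fields and scoring functions are invented for illustration, deliberately crude, and sidestep every hard question the article raises.

```python
# Hypothetical sketch: "adjustable ethics" as owner-selectable configuration.
# Policy names mirror the four options above; all scoring is invented.
from dataclasses import dataclass

@dataclass
class Outcome:
    lives_lost: int
    children_lost: int       # subset of lives_lost, weighted separately
    occupant_survives: bool
    property_damage: float   # dollars

POLICIES = {
    "democratic":    lambda o: -o.lives_lost,  # everyone has equal value
    "pragmatic":     lambda o: -(o.lives_lost + 3 * o.children_lost),
    "self_centred":  lambda o: (0 if o.occupant_survives else -1_000) - o.lives_lost,
    "materialistic": lambda o: -o.property_damage,
}

def choose(outcomes: list[Outcome], policy: str) -> Outcome:
    """Pick the outcome the owner's chosen policy scores highest."""
    return max(outcomes, key=POLICIES[policy])

swerve = Outcome(lives_lost=1, children_lost=0, occupant_survives=True,
                 property_damage=50_000)
brake = Outcome(lives_lost=2, children_lost=2, occupant_survives=False,
                property_damage=0)
print(choose([swerve, brake], policy="pragmatic"))  # kids take precedence
```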
While this is clearly a legal minefield, the car maker could argue that it should not be liable for damages that result from the user’s choices – though the maker could still be faulted for giving the user a choice in the first place. Let’s say the car maker is successful in deflecting liability. In that case, the user becomes solely responsible, whether or not they have a well-considered code of ethics that can deal with life-and-death situations.

People want choice

Code of ethics or not, a recent survey found that 44% of respondents believe they should have the option to choose how the car will behave in an emergency. About 33% thought that government law-makers should decide. Only 12% thought the car maker should decide the ethical course of action.

In Lin’s view, it then falls to the car makers to create a code of ethical conduct for robotic cars. This may well be good enough, but if it is not, then government regulations can be introduced, including laws that limit a car maker’s liability in the same way that legal protection for vaccine makers was introduced because it is in the public interest that people be vaccinated.

In the end, are not the tools we use, including the computers that do things for us, just extensions of ourselves? If that is so, then we are ultimately responsible for the consequences of their use.

David Tuffley does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations. This article was originally published on The Conversation. Read the original article.

Follow StartupSmart on Facebook, Twitter, and LinkedIn.

Best of the Web: Pivots, robots and the end of sleep

6:18PM | Sunday, 22 June

No sleep needed: New technologies are emerging that could radically reduce our need to sleep – if we can bear to use them, writes Jessa Gamble for Aeon magazine. Imagine a disease that cuts your conscious life by one-third. You would clamour for a cure. We’re talking about sleep. There may be no cure yet for sleep, but the palliatives are getting better.

“Work, friendships, exercise, parenting, eating, reading — there just aren’t enough hours in the day,” Gamble writes. “To live fully, many of us carve those extra hours out of our sleep time. Then we pay for it the next day. A thirst for life leads many to pine for a drastic reduction, if not elimination, of the human need for sleep. Little wonder: if there were a widespread disease that similarly deprived people of a third of their conscious lives, the search for a cure would be lavishly funded. It’s the Holy Grail of sleep researchers, and they might be closing in.”

Dilbert does startup: When Dilbert cartoonist Scott Adams turned himself to entrepreneurship, he wasn’t prepared for some of the weirder ways of Silicon Valley. Describing himself as an “embedded journalist”, this week he takes on the pivot.

“The Internet is no longer a technology,” Adams writes. “The Internet is a psychology experiment. Building a product for the Internet is the easy part. Getting people to understand the product and use it is the hard part. The only way to make the hard part work is by testing one hypothesis after another. Every entrepreneur is a behavioral psychologist with the tools to pull it off.”

And he’s distilled it all down in “the system”, which looks like this:

1. Form a team
2. Slap together an idea and put it on the Internet
3. Collect data on user behavior
4. Adjust, pivot, and try again

What the gospel of innovation gets wrong: “In the last years of the nineteen-eighties, I worked not at startups but at what might be called finish-downs,” writes Jill Lepore in a piece titled ‘The Disruption Machine’ in The New Yorker.

Lepore’s thesis is that Clayton Christensen’s theory of disruption, accepted across American industry as “the gospel of innovation”, is wobbly at best because it rests on a group of handpicked case studies that prove little or nothing. “Most of the entrant firms celebrated by Christensen as triumphant disrupters no longer exist, their success having been in some cases brief and in others illusory,” writes Lepore.

Anyone who has anything to do with the startup industry will relate to this point: “Ever since ‘The Innovator’s Dilemma,’ everyone is either disrupting or being disrupted,” she writes. “There are disruption consultants, disruption conferences, and disruption seminars. This fall, the University of Southern California is opening a new program: ‘The degree is in disruption,’ the university announced. ‘Disrupt or be disrupted,’ the venture capitalist Josh Linkner warns in a new book, ‘The Road to Reinvention,’ in which he argues that ‘fickle consumer trends, friction-free markets, and political unrest,’ along with ‘dizzying speed, exponential complexity, and mind-numbing technology advances,’ mean that the time has come to panic as you’ve never panicked before.”

Don’t worry about the robots: Venture capitalist Marc Andreessen does not believe that robots will eat jobs. “Robots and AI are not nearly as powerful and sophisticated as people are starting to fear,” Andreessen writes. “With my venture capital hat on I wish they were, but they’re not.
There are enormous gaps between what we want them to do and what they can do. There is still an enormous gap between what many people do in jobs today and what robots and AI can replace. There will be for decades.”

Image credit: Flickr/jdhancock

US start-up raises $11.5 million for web-connected teddy bear

3:55AM | Monday, 11 March

The struggles of the Australian toy market have been put into sharp focus by a US-based start-up founded by a former Pixar executive, which has raised $11.5 million for an internet-connected, artificially intelligent teddy bear.

How start-ups cured my big business boredom

9:59AM | Wednesday, 26 September

David Urpani doesn’t like to stay still for long. He went from being an architect to a doctor of artificial intelligence to the founder of insurance comparison giant iSelect in 2000.

Consumers trade old mobile phones for cash using ecoATM

9:36AM | Tuesday, 25 September

A new recycling “ATM” will take an old mobile phone and pay an agreed price on the spot, taking the concept of bartering to a new level.

Five sectors set to thrive beyond the mining boom

7:44AM | Thursday, 26 July

Concerns were raised by economic soothsayers this week when a new report predicted the end of the mining boom – Australia’s runaway success story – within two years.

Australian space start-up paves way for the final frontier

7:55AM | Monday, 16 July

The director of Saber Astronautics has highlighted opportunities in Australia’s fledgling space industry, after his company was chosen as a finalist in the NewSpace Business Plan Competition.

Sound it out with speech technology

8:09AM | Monday, 1 August

Artificially intelligent machines that can converse and argue with humans are just years away, according to scientists in the United Kingdom, as speech technology starts to take off.
