
The Future of Advertising

I have been thinking about the world of advertising in the age of social media. No longer do we consume advertisements exclusively through television, print and billboards; we have many media channels and opportunities to learn about new products. Customized ads are pushed to our computers and smartphones, sometimes taking advantage of our proximity to a particular retail outlet. Advertisers have to divide their dollars much differently in the 21st century but have the opportunity to target a much narrower demographic with their pitch.

A recent article in my local paper highlights how this advertising shift is compounded by an array of new technologies. Retailers and manufacturers can now use technology to custom-deliver advertising to consumers, even from a billboard. In this blog post I explore some of the technologies, available and in development, that help advertisers convert their message into sales.

Smart Billboards

Traditional billboards are static, giant advertisements that reach every driver in a shotgun approach. It is a one-size-fits-all model, and while a billboard may reach thousands of drivers every day depending on its location, the sales conversion rate is fairly low. The next step was digital billboards, which shuffle through several ads in hopes of appealing to a range of drivers. There is one on an interstate near me that is very bright and annoying, especially at night. Like the static billboard, this approach is still random: it targets the very broad demographic that happens to be on the highway at a particular time of day.

Smart billboards are an attempt to remove the randomness. Synaps Labs has created the first smart billboard in Moscow and will bring its technology to the U.S. sometime this year. The billboard is a combination of connected cameras and machine learning. Cameras are set up ahead of the billboard, and when a particular model of car is detected, the billboard displays an advertisement targeted at that driver. The billboard in Moscow showed ads for Jaguar cars; the advertisers decided that drivers of particular Volvo and BMW models might be enticed to switch to Jaguar. Advertisers are still making demographic assumptions based on a car model, but they are narrowing their target audience. The picture also changes depending on whether it is night or day, summer or winter, so an advertiser can play with many variables at once. Going beyond the billboard, they could also push the same ad to the driver’s cellphone as an extension or reiteration of the message.
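
The selection logic behind such a billboard can be pictured as a simple lookup keyed on the detected car model plus a time-of-day variant. This is only a sketch of the idea; the car models, ad names, and day/night cutoffs below are invented for illustration, not Synaps Labs' actual system.

```python
# Hypothetical ad-selection logic for a camera-equipped smart billboard.
# All model names and ad creatives here are invented for illustration.

TARGETED_ADS = {
    "Volvo XC60": "jaguar_crossover_ad",
    "BMW 3 Series": "jaguar_sedan_ad",
}
DEFAULT_AD = "generic_ad"


def choose_ad(detected_model, hour):
    """Pick an ad creative for the detected car, with a day/night variant.

    `detected_model` would come from the upstream camera/classifier;
    `hour` is the local hour of day (0-23).
    """
    base = TARGETED_ADS.get(detected_model, DEFAULT_AD)
    variant = "night" if hour < 6 or hour >= 20 else "day"
    return f"{base}_{variant}"
```

For example, `choose_ad("Volvo XC60", 22)` would select the night-time Jaguar crossover creative, while an unrecognized model falls back to the generic ad. The hard part in practice is the classifier feeding `detected_model`, which is where the machine learning lives.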

The Future of Billboards

Advertisers are looking forward to a world of autonomous vehicles where drivers/riders have the freedom to look around instead of concentrating on the road. In this future, a consumer can follow up on the impulse to purchase the advertised item while still in the car. Better yet, with a corresponding push to the smartphone, that purchase could be only one click away. While this is intriguing to advertisers, they are asking a fundamental question about consumer behavior: when riders are free to do and look at anything, will they actually be concentrating on billboards or will they be buried in their smartphone or on-board entertainment system?

Thoughts

With modern technologies there are many possible outcomes and it will take a lot of trial and error until we understand how people will behave. Do you think targeted ads on billboards would sway you? Does your car really represent your demographic, or is that grasping at straws? What is the future of advertising in the digital world? Do you think that we are becoming more discerning consumers? Let me know your thoughts.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

Robot Companions for Seniors

Photograph of smiling elderly woman using a tablet computer.

Medical technology is allowing us to live longer, but increased longevity also means more of us will live alone. Average life expectancy is rising, even if we will not all live to be 100 or older. For seniors living alone, there are now solutions to help with basic living, scheduling, and social tasks that can help keep them independent.

Robot Companions

Isolation is a problem for many people living alone. They may be unable to get out to interact with other people, or they simply may have no desire to do so. This is where robots could help. Intuition Robotics has recently introduced ElliQ, an artificial intelligence (AI) robot that interacts with seniors. While this robot does not have traditional arms and legs, it is designed to keep seniors in touch with others, help them track appointments, and even suggest activities. Most importantly, it works through a natural speech interface, communicating through a combination of lights, sounds, and voice. Because it incorporates machine learning, it learns habits and preferences and helps set and remember daily schedules and routines.

ElliQ is designed to be a stationary robot, but other robots, such as Softbank’s Pepper, are mobile. At this time Pepper can only carry the built-in tablet that acts as its interface, but it can follow or come to people who are less mobile. It is a relatively new device that is starting to be used in retail shops to interact with customers.

Robokind has developed Milo, which combines features of ElliQ and Pepper but with more humanlike limbs and facial expressions. It accepts voice input and interacts with people through natural voice output and body language. Milo is being touted for seniors and others living alone, and for people on the autism spectrum who can benefit from its personal interaction.

Possibilities

I can think of other benefits of these robots. They could aid and encourage music practice: a robot could be programmed to act as a metronome while I practice an instrument, or better yet, provide another part of the music I am playing. If I play the guitar, perhaps the robot could play bass violin or another part to accompany me. Another use could be practicing or learning a foreign language. With the right programming, the robot could provide many components of good language learning courses—lessons, immersion, repetitive practice, immediate feedback and correction.
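
The metronome idea is the simplest of these to picture in code: tempo in beats per minute fixes the interval between clicks. A minimal sketch (my own illustration, not any robot's actual API) just computes the click schedule; a real companion robot would play a sound or flash a light at each of those times.

```python
def metronome_schedule(bpm, beats):
    """Return the times (in seconds from start) at which a metronome
    at `bpm` beats per minute should click, for `beats` beats.

    A robot would iterate this schedule and emit a click (or a note
    of an accompanying part) at each time.
    """
    interval = 60.0 / bpm  # seconds between beats
    return [beat * interval for beat in range(beats)]
```

At 120 bpm, for instance, the clicks fall every half second: `metronome_schedule(120, 4)` gives `[0.0, 0.5, 1.0, 1.5]`. Accompaniment would work the same way, with a pitch attached to each scheduled beat.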

All of these things keep the mind active and hopefully slow the inevitable aging process. Repetitive tasks such as music or language lessons can increase brain activity and general life satisfaction. With the aid of technology, those extra years can be rich and rewarding.

Thoughts

Can you think of other applications that would help seniors, particularly those living alone? Will robot apps become a new industry? Let me know your thoughts.


Artificial Intelligence Applications

Artificial intelligence (AI) will continue to contribute to innovations this year. I think some industries will embrace the change and some will resist for various reasons, including job displacement and trust. Our world is changing already in terms of the tasks that computers take on. Let’s examine some of the ways that AI will change how we work in 2017 and beyond.

Definitions

AI is simply the ability of a computer to handle cognitive tasks. Some AI functions incorporate vision and robotics but do not necessarily resemble Arnold Schwarzenegger’s dangerous “Terminator” character. Think of the hundreds of decisions that you make every day and which of those could best be made by a computer, freeing you up for more creative and innovative tasks. Another term associated with AI is machine learning: the ability of a computer to learn from its past decisions and make corrective choices, similar to how we learn from our mistakes and change our thinking in order to produce a better outcome.
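
That "learn from mistakes" loop can be shown in a few lines. The toy learner below (my own bare-bones illustration, not any particular library's algorithm) classifies numbers as above or below a threshold and nudges the threshold every time it guesses wrong, which is the essence of how machine learning improves through iteration.

```python
def learn_threshold(samples, labels, passes=100, step=0.1):
    """Learn a decision threshold from labeled examples by trial and error.

    `labels` are 0 or 1; the learner predicts 1 when a sample exceeds
    its current threshold. Every wrong prediction moves the threshold
    a small step toward correcting that mistake.
    """
    threshold = 0.0
    for _ in range(passes):
        for x, label in zip(samples, labels):
            guess = 1 if x > threshold else 0
            if guess != label:
                # Guessed 1 when the answer was 0: raise the bar.
                # Guessed 0 when the answer was 1: lower it.
                threshold += step if guess == 1 else -step
    return threshold
```

Given small values labeled 0 and large values labeled 1, the threshold settles between the two groups without ever being told where the boundary is; it was corrected into place by its own errors.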

Security

In a recent InformationWeek article, the author is hopeful that AI advances will help solve a skills shortage in the cyber security field. Right now, computers are used to gather data on threats and potential threats, weed out erroneous information, and help security professionals formulate a mitigation strategy. In the future, the computer will be left to formulate and institute the threat response as well as gather the initial data. Far from displacing security personnel, this will free them up to work on higher-level tasks such as business continuity and refining the data collected and filtered. In this case, AI provides another pair of hands, and security professionals will continue to be in as high demand as they are now.

Automotive Applications

One of the AI applications I am most excited about is automotive. I have written about this in the past, and there have been some real breakthroughs recently. One practical application is Ford’s new Pro Trailer Backup Assist. I cannot back up a trailer to save my life; I was denied that gene when I was born. Somehow the trailer appears at my side whenever I try to back into a spot. With backup assist, the driver takes their hands off the steering wheel completely and backs up using a small knob on the dash: turn the knob to the right and the trailer moves to the right. This is just the opposite of steering with the wheel and certainly much more intuitive. It is an example of computer vision and control algorithms at work. Another, even more radical example is the upcoming autonomous vehicle. These vehicles make constant decisions based on sensor input from around the vehicle to transport passengers safely.

Danger Zones

Robots using machine learning differ from simple drones in that they make independent decisions based on past experience; a drone is controlled by a human operator and cannot function independently. An example of independent robot development is CHIMP from Carnegie Mellon University. CHIMP is intended for industrial applications and for search and rescue when the situation is too dangerous for humans. It makes decisions based on instructions, experience, and input from multiple sensors.

Thoughts

These are just a few AI applications, with a lot more to come. Are there tasks or decisions that you would just as soon leave to a computer? Do you trust the systems to make those decisions? This is a brave new world and it will take a leap of faith before some of these developments become completely commercialized. Let me know your thoughts.


Artificial Intelligence Applications in Medicine

Robot holds medical vial.

I am currently enrolled in a MOOC on machine learning and am intrigued by the integral role computers played in decoding DNA sequences and their ongoing role in medical research. Machine learning focuses on learning through repetition, pattern recognition, and algorithms as opposed to programmed instructions. The aim is for computers to learn from previous experiences and add that knowledge to a growing database, much as humans do. As that database grows, machines can take on even more complex tasks.

DNA decoding is just one application in the field of machine learning. I am curious about what other areas, particularly in medicine, will benefit from these algorithms. Can a computer, or group of computers, do cheaper and faster diagnostics? It turns out others are asking these same questions and exploring the benefits and applications of machine learning. What are the benefits for us as patients and how does it change the health care field?

Automated Sampling

As part of a drive to simplify procedures and cut costs, startups such as Theranos have developed automated procedures for blood tests. Their procedures and equipment are proprietary but involve sensors and computer algorithms that augment or replace human processing. The cost of their tests is much lower than that of traditional tests, and patients get quicker results. As of this writing, the company is under intense scrutiny to reveal its specific technologies and processes; federal regulatory agencies, particularly those overseeing Medicare and Medicaid, are trying to ensure the testing process is safe and the results are accurate. As with any new technology or process, the consumer must go in with eyes open and understand the risks involved. One question to ask is: how much do I trust the results? If patients can stop by a drugstore for a blood test, how and when do physicians get involved? With ready access to testing, will patients become more involved in their own health care and treatment decisions?

Radiology

In a recent article on digital diagnoses, it is estimated that radiologists, at least in Australia, review seven times more cases than they did five years ago. As with a pathologist’s lab analysis, these readings rely on pattern recognition and take a great deal of skill and experience to do well. As we make advances in machine learning and artificial intelligence, computers can take over some of that workload. If I were a radiologist, I think I would welcome the chance to offload some of my tasks. I am not suggesting that these highly trained medical professionals will be replaced, but there is room for assistance through technology.
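
One concrete way a computer can assist without replacing the radiologist is triage: a model scores each scan for likely abnormality, and high-scoring scans jump the queue for human review. The sketch below is purely illustrative (the scoring model is assumed, not shown, and the threshold is invented); the point is that the final read stays with the radiologist.

```python
# Illustrative triage sketch: a model's abnormality score is used only
# to prioritize scans in the radiologist's worklist, not to diagnose.

def triage(scans, flag_threshold=0.8):
    """Split scans into urgent and routine worklists by model score.

    `scans` is a list of (scan_id, abnormality_score) pairs, where the
    score (0.0-1.0) would come from an upstream image-recognition model.
    Returns (urgent, routine) lists of scan ids; urgent cases go to the
    top of the radiologist's queue for earliest review.
    """
    urgent, routine = [], []
    for scan_id, score in scans:
        (urgent if score >= flag_threshold else routine).append(scan_id)
    return urgent, routine
```

A sevenfold caseload increase is exactly the setting where this helps: the radiologist still reads everything, but the scans most likely to matter are read first.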

Thoughts

I wrote a blog post last year on robots in surgery and asked whether you would trust a surgeon-directed robot to operate on you. What if the human-robot team had a higher success rate than a human alone? The same questions apply to medical tests read and interpreted by a computer. Would you trust the diagnosis more or less than if it came solely from a highly trained doctor or technician?

To me it comes down to a matter of trust. Do I trust a machine to take on some of my tasks and perform some of the tests that were previously done solely by humans? With machine learning, computers improve through iterations and experience. In other words, they learn from their mistakes and successes, just like we do. This is a brave new world. Are you ready to embrace it? Let me know your thoughts.
