Monthly Archives: October 2017

Holiday Display Technology

How did we get from Christmas lights with big C9 bulbs that threatened to burn down our tree to extravagant computer-controlled LED light displays complete with choreographed music and projections? How did we get from stuffed scarecrows at Halloween to motion-activated performances on our doorsteps? Holiday decorations have taken advantage of innovations to make the holidays even more festive or scary, depending on the purpose. This blog post will explore some of the latest holiday technology that you may need.

Retail Display Technology

Large retailers such as Macy’s and Saks Fifth Avenue go to great lengths to create elaborate holiday displays. Now, thanks to Google, you can view a number of retailer holiday displays without leaving your chair. Using Street View technology, engineers have filmed window displays and turned them into an interactive experience. This is a new technology developed by Google marketing for retailers that already advertise with Google.

History

The first holiday light display was put up in the late 1800s as a way to replace burning candles on trees. Some early displays required generators since electricity was not yet prevalent in many areas. Commercial light sets became available and affordable around 1917. Aluminum trees were introduced in the 1950s and 1960s but could not be used with lights, so a rotating color wheel was used instead to splash color onto the ornaments. The mini-bulb of the 1970s brought back traditional lighting inside and out and was more energy efficient. While the mini-bulb is still used, LED lighting is making a push into mainstream lighting displays. LED bulbs can now be programmed to change colors and create ever more extravagant light displays, and they can be paired with an app to direct a light display remotely. Who knows whether your holiday lights might be hacked in the future?

Festive Laser Lights

Over the last couple of years, laser light projectors have started to augment or replace traditional outdoor holiday lights. These are basically red and green lasers that are projected onto a home or trees. The laser projections are fractured so that you get multiple points of light. Originally these came as static displays, but they are now available as motion lights with options for different patterns in red, green, or both. These could replace static LED or mini light strings that have to be installed and taken down every year. It remains to be seen how your neighbors will accept this product, especially if your motion light pattern accidentally shines on their house or car. Also, there are warnings not to shine these up in the air within 10 miles of an airport. New this year are full-spectrum white lasers and the option to display more than just red or green, which can extend the light display to Halloween, the Fourth of July, or other holidays. The technology continues to be refined, and the quality and accuracy of these displays are improving.

Choreographed Light Display

Choreographed light displays have been a big hit on YouTube over the last couple of years. These are lights synchronized with music played either over a loudspeaker or through an FM radio signal. The controller can be something as simple as a Raspberry Pi. Judging from videos online, the sophistication and sheer volume of lights used in these displays seem to be growing. Perfect for the competitive techie.
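To make the controller idea concrete, here is a minimal sketch of how a Raspberry Pi might switch a single light channel from a pre-computed cue list. The GPIO pin, relay wiring, and cue times are illustrative assumptions rather than any particular display's setup.

```python
# Minimal, illustrative sketch: toggle one light channel on a Raspberry Pi
# in time with a pre-computed cue list. Pin number and cue times are
# assumptions for illustration only.
import time
import RPi.GPIO as GPIO

LIGHT_PIN = 18  # BCM pin wired to a relay driving the light string (assumed)

# (seconds_from_start, on/off) cues, e.g. exported from a sequencing tool
CUES = [(0.0, True), (0.5, False), (1.0, True), (2.0, False)]

GPIO.setmode(GPIO.BCM)
GPIO.setup(LIGHT_PIN, GPIO.OUT)

try:
    start = time.monotonic()
    for cue_time, state in CUES:
        # Wait until this cue is due, then switch the relay.
        time.sleep(max(0.0, cue_time - (time.monotonic() - start)))
        GPIO.output(LIGHT_PIN, GPIO.HIGH if state else GPIO.LOW)
finally:
    GPIO.cleanup()  # release the pins when the show (or an error) ends
```

The music itself would play separately, from a media player feeding the loudspeaker or FM transmitter, started from the same clock so that lights and audio stay in step.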

Thoughts

We have come a long way from candles on the Christmas tree to light and music extravaganzas in neighborhoods. Ever bigger, brighter, and more sophisticated. I don’t know where we go from here for the next cool lighting tech but I value your opinion. Let me know what you think.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

Open Source vs. Commercial Applications

In 2004 the city council of Munich, Germany, voted to migrate city PCs away from Microsoft applications and operating systems to a version of open source Linux. It was a nine-year project that resulted in nearly 20,000 PCs being converted to an open source platform. The city still has approximately 4,000 PCs running Windows for critical Windows-based applications. The project was hailed as a success just last year, but there is now some question as to whether the city is getting the cost savings and efficiencies it expected. Some are questioning whether ongoing application compatibility issues are worth continuing this push.

The city of Munich is beginning a migration to Exchange, away from the open source Kolab suite it used for calendaring and email. It appears that an Outlook migration is also in the works, opening the door to a broader return to Microsoft products. A study is underway to determine the cost of moving city PCs to a Windows 10 platform. Informed by the study results, the city council will vote in November on whether to migrate back to Windows. Why the reversal? Is it impossible to run an open source environment that is compatible with other commercial applications? Does it take a special IT skill set to be successful? These are the questions going through my mind.

Politics

This is becoming a political issue as well as one of cost and productivity. The ruling party is pushing to move to Windows, citing employee dissatisfaction. The opposition party wants to stay the course with Linux-based systems in order to take advantage of the investment already made. To muddy the waters, there is also a question of IT efficiency and effectiveness. When the open source migration began after 2004, there was a parallel push to centralize IT from local organizations. The city ended up centralizing into three IT functions, and some employees claim that it is this centralization, not the open source software, that is causing dissatisfaction with IT. The issues appear to run deeper than commercial vs. open source software.

Compatibility

I am a fan of open source software and would love to see a wide-scale installation succeed. The Munich migration, touted as “LiMux, The IT Evolution,” was one of the largest installations of open source software. With all of the finger pointing going on, it is unclear whether the problems lie with the Linux operating system, with open source applications such as OpenOffice, or with the IT organization. Because Linux and open source applications are not the predominant platforms in the world of IT, they will continue to play a minority role and to face compatibility issues. These are issues that we discuss at length in our innovations course. Open source is not the “safe play” for an IT department, but it can be cost effective and worth the time invested.

Thoughts

Are you using open source applications or operating systems in your organization? What are the tradeoffs, if any, between cost, ease of use, reliability and compatibility? Would you recommend open source applications and operating systems to others? Let me know your thoughts.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

Our Brains on Technology

A recent University College London study suggests that overuse of satellite navigation systems, or GPS, is actually shutting off parts of our brain. Researchers say that the prefrontal cortex and hippocampus regions of the brain are stimulated when navigating streets and weighing potential routes but are turned off when following GPS prompts. Just as we develop muscles in our body through exercise, mental activity activates parts of our brain. The authors of this study don’t claim that the evidence is conclusive, but it leads me to wonder what other brain functions are not being exercised because of our use of technology. This post is dedicated to the idea of a balanced, not blind, approach to technology.

Evolutionary Changes

Could it be true that our brains are changing due to emerging technologies? If so, what implications does that have? Is it a net loss in intelligence or is it simply that one area of the brain gets stronger while another gets weaker? I wonder if early society worried about changes when we went from primarily a spoken language to a spoken and written language. Would we get lazy because we no longer had to remember the oral traditions of our forefathers to pass on to future generations? How did writing change us as individuals and as a society? In the same vein, how are digital technologies changing us today? Are we becoming net smarter? So many questions.

London Taxis

A 2011 report highlights biological changes in the brain structure of London taxi drivers. The study shows that these drivers, who study London maps for three to four years before their licensing examination, have increased activity and capacity in one section of their brain but decreased capacity in another. In other words, by studying the routes among London’s 25,000 streets, their spatial skills increased while other cognitive capacity was lost. They are obviously good at their jobs, so is the shift in their cognitive abilities a bad thing, or is it just different?

Is Google Making Us Stupid?

In a 2008 article in The Atlantic, former Harvard Business Review executive editor Nicholas Carr asks a similar question when he muses whether Google is making us stupid. To be more precise, he questions whether search engines are changing our reading and study habits and pulling us away from deep reading. He cites his own growing inability to read a long article or an entire book because of his habit of skimming many sources instead of concentrating on one paper or book. He asks the same questions that I pose. Is this change in our cognitive ability good, bad, or indifferent? Several studies point to the human brain’s incredible plasticity and ability to adapt to changing stimuli, so perhaps the answer is simply that it is different, and perhaps evolutionary.

Thoughts

New technologies are changing the way we live our lives and perform everyday tasks. I think it is worth asking whether they are changing our habits and thinking for the better, or whether it is simply change, neither good nor bad. Let me know your thoughts.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

Filling the Cybersecurity Talent Pool

I seem to see a new article weekly raising the alarm about the number of unfilled cybersecurity jobs. A 2015 report from (ISC)2 projects the shortfall to rise to 1.5 million worldwide by 2020. A recent Harvard Business Review article highlighted the gap in the number of skilled cybersecurity professionals and offered some insight into how we can bridge that gap through educational programs and by hiring non-traditional employees. My aim with this post is to start a dialogue on creative ways to attract fresh minds and new faces to the field.

Traits

First of all, what traits are most desired in a security professional? I would submit that a strong sense of curiosity is important. Those creating hacks and spreading malware are certainly curious about how much trouble they can cause, so it stands to reason that those tasked with detecting intrusions should also be curious. The next question is whether people are born curious or whether curiosity can be learned. The authors of a 2015 Fast Company article suggest that we are all born curious, that many of us lose that sense of curiosity, and that it can be regained through discipline.

It is also important to have a keen sense of patterns. I believe that everyone seeks out patterns in order to make sense of chaos, but some have an innate sense for irregularities that others cannot see. As pointed out in the Harvard Business Review article, machine learning is augmenting that pattern searching and discovery, but it will still take human intelligence to find security anomalies.
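As a toy illustration of that pattern searching, the sketch below flags hours whose login-failure counts sit far from the recent average. The sample counts and the three-sigma threshold are my own simplifying assumptions; real security tooling is far more sophisticated.

```python
# Toy illustration of machine-assisted anomaly spotting: flag hourly
# login-failure counts that deviate sharply from the recent mean.
# The counts and the z-score threshold are illustrative assumptions.
from statistics import mean, stdev

hourly_failures = [3, 5, 4, 6, 2, 4, 5, 3, 48, 4, 5, 3]  # made-up counts

mu = mean(hourly_failures)
sigma = stdev(hourly_failures)

for hour, count in enumerate(hourly_failures):
    z = (count - mu) / sigma if sigma else 0.0
    if abs(z) > 3:  # three standard deviations: a crude "irregularity" test
        print(f"hour {hour}: {count} failures looks anomalous (z={z:.1f})")
```

A human analyst still has to decide whether a flagged spike is an attack, a misconfigured server, or just payroll day.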

Education

In order to train and retain more cybersecurity professionals, we are going to have to change our thinking on where they come from. They don’t necessarily all come with a four-year computer science degree in their pocket. Some do have that credential, to be sure, and they excel in the field, but we are going to have to cast a wider net in order to fill the gap. When I think of the traits of curiosity and pattern recognition, I think of trained musicians. Is it possible that someone could be a security expert during the day and a musician at night, or vice versa? Do we need to look closer at how we match up hobbies and vocations? Can the lines between the two be blurred?

Harvard offers an eight-week introductory online course in cybersecurity through HarvardX. This is one of several online courses that allow a prospective professional to test the waters, and it is a great way to introduce potential security enthusiasts to the field. A graduate of this course may decide to go on to take advanced courses, either online or at a nearby college training center, hopefully leading to certifications and a job offer in the field. As employers facing a skills shortage, we need to be flexible in whom we seek and how we view academic and professional backgrounds. Perhaps expanded internships are in order for the right candidate.

Thoughts

These ideas can apply to other fields facing employee shortages, but I think it is important to stay flexible about whom we view as potential hires. If we continue to look at a narrow pool of candidates, this gap is only going to grow. Let me know your thoughts.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

The Future of Advertising

I have been thinking about the world of advertising in the age of social media. No longer do we consume advertisements exclusively through television, print and billboards; we have many media channels and opportunities to learn about new products. Customized ads are pushed to our computers and smartphones, sometimes taking advantage of our proximity to a particular retail outlet. Advertisers have to divide their dollars much differently in the 21st century but have the opportunity to target a much narrower demographic with their pitch.

A recent article in my local paper highlights how this advertising shift is compounded by an array of new technologies. Retailers and manufacturers can now use technology to custom deliver advertising to consumers, even from a billboard. In this blog post I explore some of the technologies available and in development to help advertisers convert their message into sales.

Smart Billboards

Traditional billboards are static, giant advertisements that reach every driver in a shotgun approach. It is a one-size-fits-all model, and while these billboards potentially reach thousands of drivers every day, depending on their location, the sales conversion rate is fairly low. The next step was the digital billboard, which can shuffle through several ads in hopes of appealing to a range of drivers. There is one on an interstate near me that is very bright and annoying, especially at night. This approach, like the static billboard, is still random in that it targets the very broad demographic that happens to be on the highway at a particular time of day.

Smart billboards are an attempt to remove that randomness. Synaps Labs has created the first smart billboard in Moscow and will bring its technology to the U.S. sometime this year. The billboard is a combination of connected cameras and machine learning. Cameras are set up ahead of the billboard, and when a particular model of car is detected, the billboard displays an advertisement targeted at that driver. The billboard in Moscow carried ads for Jaguar cars: the advertisers decided that drivers of particular Volvo and BMW models might be enticed to switch to Jaguar. Advertisers are still making demographic assumptions based on a car model, but they are narrowing their target audience. The picture also changes depending on whether it is night or day, summer or winter, so an advertiser can play with many variables at once. Going beyond the billboard, they could also push the same ad to the driver’s cellphone as an extension or reiteration of the message.
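To make the idea concrete, here is a hedged sketch of the decision logic such a billboard might run once its cameras have classified an approaching car. The model names, ad mapping, and day/night rule are invented for illustration; this is not Synaps Labs' actual implementation.

```python
# Illustrative sketch of smart-billboard ad selection. The car-model
# detections, ad inventory, and day/night rule are invented assumptions,
# not a description of any vendor's real system.
from datetime import datetime

# Which detected car models trigger which creative (hypothetical mapping).
AD_BY_MODEL = {
    "volvo_xc60": "jaguar_f_pace_day",
    "bmw_x3": "jaguar_f_pace_day",
}
DEFAULT_AD = "generic_brand_day"

def pick_ad(detected_model: str, now: datetime) -> str:
    """Choose a creative for the detected car, swapping to a night variant."""
    ad = AD_BY_MODEL.get(detected_model, DEFAULT_AD)
    if now.hour >= 20 or now.hour < 6:  # crude night-time rule (assumption)
        ad = ad.replace("_day", "_night")
    return ad

# Example: a camera upstream of the billboard reports a detection.
print(pick_ad("bmw_x3", datetime.now()))
```

The hard part, of course, is the upstream camera and classification pipeline; the selection logic itself can stay this simple.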

The Future of Billboards

Advertisers are looking forward to a world of autonomous vehicles where drivers/riders have the freedom to look around instead of concentrating on the road. In this future, a consumer can follow up on the impulse to purchase the advertised item while still in the car. Better yet, with a corresponding push to the smartphone, that purchase could be only one click away. While this is intriguing to advertisers, they are asking a fundamental question about consumer behavior: when riders are free to do and look at anything, will they actually be concentrating on billboards or will they be buried in their smartphone or on-board entertainment system?

Thoughts

With modern technologies there are many possible outcomes and it will take a lot of trial and error until we understand how people will behave. Do you think targeted ads on billboards would sway you? Does your car really represent your demographic, or is that grasping at straws? What is the future of advertising in the digital world? Do you think that we are becoming more discerning consumers? Let me know your thoughts.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.