Tag Archives: data

Data Nationalization: Drawing Borders in the Cloud

The Moscow city government announced last week that it will move 6,000 government computers off of Microsoft Outlook to a Russian-produced application called MyOffice Mail. If the migration is successful, 600,000 more systems will follow next year. Cost savings is cited as one reason for the move, but nationalism is also a big factor. In an interview, Communications Minister Nikolay Nikiforov told reporters, “We want the money of taxpayers and state-run firms to be primarily spent on local software.” The Russian prime minister has called for a migration away from foreign software, citing security concerns amid tensions with the West. Russia is not the only nation, and Moscow is not the only city, moving in this direction.

The internet was meant to be global, but recent announcements and actions suggest we are drawing borders in the cloud. This post updates a 2014 post that highlighted the beginning of this movement; recent developments suggest the trend is accelerating.

LiMux—The IT Revolution

Munich made a similar move in October 2013 when it finished the rollout of LiMux, a version of Ubuntu Linux. The almost decade-long migration off of older Microsoft systems and applications was marked by the rallying cry “The IT Revolution.” That migration was about cost containment and control; the city felt it could not regulate the pace of required operating system and application updates. The jury is still out on whether the move delivered the intended benefits or created a bigger headache for the technology department as it deals with compatibility issues. Either way, it is an example of reining in control of technology and storage as traditional vendors move to cloud-based systems such as Office 365.

Legal Boundaries

Russia’s data nationalization law requires that all personal data about Russian citizens be stored and processed on servers inside Russia. The routing of such data is a point not completely worked out yet, and it may prove much harder to keep within the borders. Australia has a similar law specifically covering the storage and transport of citizens’ electronic health records.

In a 2015 paper published in the Emory Law Journal, the authors highlight a number of countries that have implemented regulations to restrict the storage and movement of data inside and outside their borders. Some of these regulations were a reaction to the 2013 revelations of NSA surveillance and data collection on countries and heads of state. Countries are moving to protect their citizens by regulating at least their portion of the cloud. This trend will most likely escalate and present difficulties for internet companies large and small.

Thoughts

My objective in this post is to speculate on the future of the cloud. We already have a private cloud, a public cloud, and now a hybrid cloud. Will these be followed by a Russian cloud, a Chinese cloud, and a U.S. cloud? Will that hamper the open nature of the internet, or will it simply provide information security for each nation, state, or municipality, just as physical borders provide personal safety? Let me know your thoughts.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

Customer Data: The New Capital

Sports Authority, a retail chain of sporting goods stores, recently filed for bankruptcy and sold off all of its assets. One of the highest bids was for its name, e-commerce site, and customer data, bought by rival Dick’s Sporting Goods for $15 million. In contrast, a package of several store leases went for only $8 million, and naming rights to Sports Authority Field, also known as Mile High Stadium, home of the Denver Broncos, are still on the auction block. It appears that customer information is the new desired capital, but what does that say about our privacy and the use of our personal information? Is it truly for sale to the highest bidder? Did we actually agree to that?

Privacy Policies

The Sports Authority privacy policy states, “We may transfer your personal information in the event of a corporate sale, merger, acquisition, dissolution or similar event.” Information collected and stored at the Sports Authority website includes full name, street address, e-mail address, telephone number, credit card number, and credit card expiration date. This is not unique to Sports Authority; other online retailers collect the same information and include a similar caveat in their privacy policies. It is up to the consumer to read and understand that clause and decide whether it is worth the risk.

Relationships

When signing up for rewards programs I agree to hand over my personal information, regardless of whether I read the privacy policy or not, but I expect our relationship to end if the company is dissolved. In the case of Sports Authority, my intended relationship was with them and not with Dick’s Sporting Goods or someone else. Is there a step in the process that lets me break off the deal should I not want to be solicited by the highest bidder?

Thoughts

With value on customer data comes responsibility to customers who have disclosed their information and expect at least a minimum of privacy and discretion. Privacy advocates are watching these developments closely. They are concerned that the new owners will not adhere to the original privacy agreement and will use the customer information in ways not originally agreed upon.

Let me know your thoughts on buying and selling customer information. It is not a new idea; I have received solicitations from car dealers for years based on information available from the division of motor vehicles. What is new is how easy it is to collect, buy, and sell this information, and how much associated customer data can be put up for sale to the highest bidder.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

Advances in Wellness: Improving the Quantified Self

New medical products and apps introduced at the recent Consumer Electronics Show Digital Health Summit (CES) show a lot of promise for keeping us healthy and safe. Medical developments are often overshadowed by tech gadgets such as new cars and artificial intelligence products, so it’s time to give them their day in the sun.

Pathway Genomics OME

The Pathway OME app, powered by IBM Watson, collects personal health information from a variety of sources, including electronic health records, data from health monitoring devices, and even a DNA sample. From this data, the app will give you advice or alert you to potential health issues. Through IBM Watson’s data intelligence, you can receive personalized information on potential interactions between food and drugs, or receive a custom diet and exercise regimen.

Wisewear

Who says that health and safety monitoring can’t be fashionable? Wisewear makes a fashion bracelet that monitors vital statistics and acts as an emergency beacon. When you think you are in danger or need help, tap the stylish bracelet three times and it will connect with your phone to send out text alerts, including your location, to friends. This is a great marriage of form and function.

Quell

Neurometrix makes a drug-free pain management device that monitors and counteracts chronic pain 24/7. The device is an electrode, worn on the upper calf just below the knee, that delivers a signal to block pain neurotransmitters throughout the body. It counteracts pain from arthritis and other musculoskeletal issues and allows the wearer to enjoy work and activities. It syncs with a smartphone app to deliver a profile of your pain management, and it is adjustable and easily rechargeable. My father used to connect leads to a voltage generator to help ease his arthritis pain. I realize now that he was just ahead of his time, although maybe his system was not quite as elegant.

Mimo

The Mimo Smart Baby Monitor uses very low voltage sensors built into a baby sleeper to deliver information about breathing, movement, and sleep/wake patterns to a smartphone app. This, in theory, lowers stress for new parents and allows them to sleep better. The same information is also available to other smartphones if a parent has to be out of town but still wants to track their baby. Definitely a quantified life right from day one.

Resound

Enzo hearing aids from Resound combine advanced technology with a sophisticated smartphone app that lets you fine-tune your hearing to different conditions. Whether you find yourself in a crowded, noisy room or in a quiet place trying to hear a soft voice, the app lets you discreetly adjust your hearing aids. You can also couple them to your smartphone to listen to music or voice directly through your hearing aids.

Thoughts

The health technologies displayed at CES this year are designed to help us be active, healthy, and safe and provide the capability to monitor and assist those we love.

Did you see any extraordinary products at CES this year? Let me know what caught your eye.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

Implementing Privacy Policy Across Borders

Digital privacy and security often go hand in hand, and the two will continue to be center stage in information management in 2016. As we continue to work through the freedoms and accessibility that come with our connected world, we need to take a broader view than just our own community and country. How will digital policy in other parts of the world affect the way we conduct business and protect our digital identities? An article this week about emerging policy in the European Union (EU) helped me understand the implications for my own digital persona.

Secondary Use

The EU has developed privacy and data protection reforms that could be enacted within two years. Under the new legislation, a European citizen’s information cannot be used for a secondary purpose without their consent. For example, if I agree to reveal my current location to use Google Maps or to find the nearest Olive Garden, that piece of information cannot also be used to target me with an advertisement for a local gym membership. Anyone intending to sell personal data would need to know the potential buyers ahead of time and must get permission from all individuals whose data may be sold. Because it will be difficult to limit this to EU citizens, the effect could become wide-ranging. This also has implications for anyone doing data mining and analytics to create and sell information or profiles.

Profiling

Personal profiling is also covered in this recently passed legislation. Profiling is not prohibited, but the legislation places the burden on the profiler to reveal the information collected and the algorithms used to create the portrait. If I eat out every Tuesday night, shop for groceries every Thursday night, and have recently searched online for chef schools, someone could conclude that I am tired of restaurant food and target me with an ad for a local kitchen store. Before that happens, however, I have the right to know just how that data-mined profile was created. While this helps me as a consumer, as an IT professional I now have to be careful conducting any data mining or analytics and must be transparent in my work and intent.

In The Cloud

While I applaud the EU for its sweeping reforms I think they will be difficult to enact and enforce. Here is the dilemma for me: how do I reconcile geographical boundaries with cloud boundaries, which by definition are ethereal? For example, as an EU citizen, the data collected about me could be housed on cloud servers in Frankfurt or Mumbai or Buenos Aires or Atlanta. Do the laws refer to me as a citizen living within the European geographical boundaries? Or do they refer to the location of my data? What if I am a German resident but my data is housed and mined outside of the EU? What then?

Thoughts

The European legislation is still at least two years away from being enacted. In that time we need to broaden our thinking beyond government boundaries and create worldwide policies regarding security and privacy. It would be difficult to specifically mark all data belonging to citizens of a particular country, but it would be easier to apply the same standard for users worldwide. It will take a concerted effort to think beyond controlled boundaries and work together to consider what is best for all digital citizens. Do you think we will ever be able to agree on global digital policies? Let me know your thoughts.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

High Tech Fire Watch

We are in the middle of fire season here in the Northwest. This has been a hot, dry summer, so the threat of wildfire is great. Several of my friends have worked on fire crews at some point, so I wondered about the role technology plays in fighting wildfires. I was delighted to find that someone had blazed that trail before me, and that technology plays a role not only in firefighting but also in fire protection. In this blog post I will focus on technology in fire protection; I will dedicate an upcoming post to technology in firefighting.

Eye In The Sky

I was amazed to find that many of the rustic fire towers perched on mountaintops in California, Oregon, and Washington are decommissioned. In a recent article in Outside magazine, the authors report that fewer than 35 percent of the towers are still manned. Due to budget cuts, fire watchers have largely been replaced by a network of cameras. According to the article, a camera can spot a fire up to 100 miles away and can detect fires at night through near-infrared vision.

ForestWatch

Oregon has a network of cameras called ForestWatch, made by Envirovision Solutions. The cameras are networked to provide coverage over the most fire-prone areas of the state. They are monitored remotely and can detect a change in the terrain relative to a digital model: using mathematical algorithms, the cameras send an alarm when they detect anomalies or pattern differences such as fire or smoke. The remote monitoring station can then focus one or more cameras on the suspicious area and collect GPS coordinates in case a ground or air crew needs to be sent in. Fires are spotted more quickly and their specific locations are known much sooner, which may reduce a fire’s spread and damage.
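The article does not describe ForestWatch’s actual algorithms, but the basic idea of change detection against a reference model can be sketched in a few lines of Python. This is a toy illustration only; the frame format, function names, and thresholds are invented:

```python
# Toy sketch of camera-based change detection: compare a new frame
# against a reference scene pixel by pixel, and raise an alarm when
# enough of the scene deviates. Frames here are flat lists of
# grayscale intensities (0-255).
def changed_fraction(reference, frame, pixel_threshold=30):
    """Fraction of pixels differing from the reference by more than a threshold."""
    changed = sum(
        1 for ref, new in zip(reference, frame)
        if abs(ref - new) > pixel_threshold
    )
    return changed / len(reference)

def smoke_alarm(reference, frame, area_threshold=0.05):
    """Alarm when more than 5% of the scene has changed."""
    return changed_fraction(reference, frame) > area_threshold

reference = [100] * 1000             # flat gray reference scene
frame = [100] * 900 + [180] * 100    # 10% of pixels brightened
print(smoke_alarm(reference, frame))  # True -- flag for a human to verify
```

A production system would work on real image data, account for lighting and weather, and cross-check multiple camera angles before alarming, which is why these networks still hand off suspicious hits to a human monitor.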

Education

This is a great use of technology but what kind of education does it take to install, program, and monitor these cameras? My research shows knowledge in the following areas is required:

GIS—A strong background in geographical information systems (GIS). This includes mapping and data analysis.

Data modeling—A strong background in data modeling and database management. There are many data points involved here, from GPS coordinates to topographical data to wind speed to moisture index, and they all need to be combined and modeled to show the monitor what fire crews will encounter.

Wireless networking—These cameras are networked to the central monitoring station and often to each other. In a suspected fire, multiple cameras from various angles can verify the validity of the alarm. A person would need a strong background in wireless networking to establish and maintain these cameras.

Thoughts

Fire watch cameras are a good use of technology and a reminder that new jobs often require a strong education in math and science as well as specific technical skills. As the technology moves from human fire watchers to sophisticated data collecting cameras, we must continue updating our education to be prepared for these jobs of the 21st century.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

Spring Cleaning That Dirty Data

I am in spring cleaning mode this week, and plenty of projects around the house need attention. Now that the sun is out, I can see how dirty my windows really are. In addition to physical cleaning, I am also trying to clean up my files and data, and I would encourage you to do the same. Just as January is for resolutions, the arrival of spring is a good tickler for cleaning.

Big Data

There is a lot of talk about big data and the potential for new insights through careful analysis. What we don’t talk about enough is that these brilliant insights will not be possible unless we organize and cleanse the data we have. The biggest problems are missing data, inaccurate data, and redundant data. Until we clean them up, the results of our analyses will continue to be flawed.

If you work with customer records, medical records, financial records or other critical data, you should be scrubbing constantly. For the rest of us, we should provide a good annual cleaning, at a minimum. It really all comes down to trust. Do I trust the results I am getting and do I trust the underlying data? If not, it is time to clean.

Missing Data

Information professionals say, “garbage in, garbage out.” This is especially applicable to missing data. For example, suppose a form prompts customers to supply their name, address, city, state, and zip code. If some customers fail to provide their zip code, you can never sort accurately on that field, and if you want to send advertising to a select geographic location based on zip code, you cannot. Your data for this task is incomplete and useless. Maintaining strict rules on incoming data can alleviate this problem.
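The intake rule described above can be sketched in a few lines of Python. This is a hypothetical example; the field names and records are illustrative, not any particular retailer’s schema:

```python
# Flag incoming customer records that are missing required fields,
# so bad rows are caught at intake rather than at analysis time.
REQUIRED_FIELDS = ("name", "address", "city", "state", "zip")

def missing_fields(record):
    """Return the required fields that are absent or blank in a record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f, "").strip()]

records = [
    {"name": "A. Smith", "address": "1 Main St", "city": "Eugene",
     "state": "OR", "zip": "97401"},
    {"name": "B. Jones", "address": "2 Oak Ave", "city": "Portland",
     "state": "OR", "zip": ""},  # blank zip -- cannot be sorted by zip
]

for r in records:
    gaps = missing_fields(r)
    if gaps:
        print(f"Rejecting {r['name']}: missing {', '.join(gaps)}")
```

Rejecting (or at least flagging) such records at the point of entry is far cheaper than discovering the holes when the marketing mailing goes out.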

Inaccurate Data

Inaccurate data is even worse than missing data. With missing data, you can at least see where the holes are, even if you cannot sort on that information. With inaccurate data, you could be happily marching down the yellow brick road without knowing how bad your results are; you may not even know the extent of the problem. The key to accurate data is to put filters in place so incoming data is checked for accuracy, correct values, and values in the correct field.
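Such filters might look like the following sketch. Everything here is an assumption for illustration: the shortened state list, the zip pattern, and the swapped-field heuristic are not any product’s actual rules:

```python
import re

# Illustrative accuracy filters: check that values are well-formed and
# land in the correct field before they enter the database.
VALID_STATES = {"OR", "WA", "CA", "ID", "NV"}  # shortened list for the example
ZIP_PATTERN = re.compile(r"^\d{5}(-\d{4})?$")

def validate(record):
    """Return a list of accuracy problems found in one customer record."""
    problems = []
    if record.get("state") not in VALID_STATES:
        problems.append(f"unknown state: {record.get('state')!r}")
    if not ZIP_PATTERN.match(record.get("zip", "")):
        problems.append(f"malformed zip: {record.get('zip')!r}")
    if any(ch.isdigit() for ch in record.get("city", "")):
        problems.append("digits in city field -- values may be swapped")
    return problems

print(validate({"city": "Eugene", "state": "OR", "zip": "97401"}))  # []
print(validate({"city": "97401", "state": "Oregon", "zip": "Eugene"}))
```

The second record passes a naive completeness check (nothing is blank), which is exactly why accuracy filters deserve their own pass.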

Redundant Data

Another problem is redundant data, which can come from poor version control or from not replacing old values with newer ones. As an example, think about your personal digital photo storage. How many times have you stored the same photo? If you are anything like me, you have a copy on your phone, your computer, possibly your tablet, and one or two memory cards. The good news is that if you ever have a device failure you have plenty of backup sources; the bad news is that you have created redundant data. With the introduction of cloud computing, we should be able to sync everything to the cloud and keep one clean, filtered copy of everything. Unfortunately, there seem to be some lingering trust issues with the cloud, but hopefully we can get beyond that.
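For files such as photos, a common way to find redundant copies is to hash file contents: identical bytes produce identical digests regardless of the file’s name or location. A minimal sketch:

```python
import hashlib

# Find duplicate files (e.g., the same photo stored twice) by hashing
# each file's contents and grouping paths that share a digest.
def find_duplicates(paths):
    """Map each content digest to the list of paths sharing it (groups of 2+)."""
    by_digest = {}
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        by_digest.setdefault(digest, []).append(path)
    return {d: ps for d, ps in by_digest.items() if len(ps) > 1}
```

For large photo libraries you would hash in chunks rather than reading whole files into memory, but the principle is the same: one content fingerprint, however many copies.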

Thoughts

Big data can get out of control quickly without well-thought-out strategies for input, organization, and cleansing. This year, as part of your spring cleaning, identify the areas where you have dirty data and vow to get them under control before they control you.

Do you have any advice for cleaning big data and keeping it clean? Are there any products that have worked well for you? Cleaning data is harder than cleaning windows but the results can be just as bright.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

Data in the Dirt: Technology in Farming Redux

Last year I blogged about unique uses of technology in farming. Spring is in the air here in the Pacific Northwest, so I want to revisit that thread and highlight a technology and company born right here at the University of Oregon. The company is researching the interaction between plants and fertilizers, particularly nitrogen. It has developed a technology and device that allow a farmer or grower to monitor the nitrogen level of the soil, thus preventing excess fertilization and runoff.

SupraSensor Technologies

SupraSensor Technologies was formed in 2012 out of the graduate work of Calden Carroll, in partnership with his professors, Darren Johnson and Mike Haley. They discovered that the interaction between plant cells and nitrogen could be measured. Nitrogen fertilizer is water soluble, and excess nitrogen runs off and mixes with the water table. In some areas of the country there are large algal blooms fed by this runoff. Algal blooms change water pH and oxygen levels, which harms fish and other organisms, and some species of algae are toxic, even deadly, to people and animals.

Field Nutrient Sensors

Carroll and other researchers did not stop at identifying this molecular interaction. They developed a device called a Field Nutrient Sensor™ (FNS™), which measures the nitrogen level in the soil, just below root level. This information is collected wirelessly so that a farmer can determine precisely where to fertilize and when to stop. It is estimated that 30 percent of all fertilizer runs off, so this device would reduce the use of chemicals, thus saving money for the farmer and promoting a sustainable and healthier farm. Collecting the data wirelessly is much less labor intensive and yields more accurate and timely data.

Farming Meets Information Technology

SupraSensor Technologies has test sensors in the field right now and is seeking funding for commercialization. It has secured seed funding from the National Science Foundation and through state and national grants. The ability to collect this important data means that farms will now need information technology experts to not only help with the data collection and wireless networking but also with data analytics to create a coherent picture of the health of the farm, the plants, and the soil. Information technology is emerging from the computer room and finding its way to the farm, the manufacturing floor, the research site, and wherever data is being turned into solutions for a better world.

Thoughts

There are many opportunities developing for IT professionals, and it is an exciting time to be involved in tackling real-world problems like healthy farms and sustainable ecosystems. Do you know of other technology and research breakthroughs that you would like me to highlight? Let me know if you have cool things that need to be shared.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

The Power of Data

My last blog post was on the power of information. This week I take a different twist and talk about the power of data. Some would argue they are the same thing, but I believe they are two sides of the same coin. I could write an entire blog post on the difference, but I will save that for another time. Two things prompted me to write about this topic: a TED Talk by Susan Etlinger about critical thinking when dealing with data, and my recent attendance at the ARMA International conference of records managers in San Diego.

Critical Thinking

In Susan Etlinger’s talk, she stresses the need to apply critical thinking to the ever-growing stream of data we face. Computers cannot yet do the cognitive processing necessary to extract nuggets of information and wisdom from raw data; they can only apply patterns that we introduce to them. The real job of providing context and meaning still falls to us. Having the smartest people interpret facts and figures in ways that yield innovative business approaches is what provides a competitive advantage for a company. We are at a point where most businesses have access to the same computing capacity and the same data coming from the same cloud; the differentiator is increasingly the thinking human being at the end of the process.

All That Data

I was fortunate to attend the ARMA conference in San Diego last week, a gathering of records managers and information professionals. As I listened to the presentations and met with professionals, I was struck by the incredible amount of data they are tasked with managing. Some of that data is in the form of old paper records being converted to digital content and indexed so it can be mined and searched. Some records are already digital but are held in many different repositories and cannot be searched across platforms and databases. For these professionals, job one is to collect everything in one place. Job two is to create meaning and context through intelligent queries. The data and the facts are present, but they cannot be converted into innovative answers until someone asks the right question. I was impressed by the practitioners I met who work in fields such as medical care, law enforcement, higher education, and government. They truly understand the monumental task ahead of them, but they also understand that they can make a personal difference at the end of the day.

Thoughts

I just finished teaching a course in information systems and management for the AIM Program. Whenever I teach, I understand that I can either present just the facts or I can help build context and meaning around those facts. I want my students to wake up in the middle of the night with an idea that they developed by analyzing the facts but also by applying critical thinking and asking the hard questions. I want them to synthesize the data from many sources until they arrive at that “aha” moment that leads to a breakthrough. This is what great research is all about and this is what great learning is all about. If I can help inspire those new and exciting combinations of data and ideas, then I have truly been successful.

About Kelly Brown

Kelly Brown is an IT professional, adjunct faculty for the University of Oregon, and academic director of the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

The Dark Side of the Deep Web

There have been a number of stories and references to the “Deep Web” in the media over the last two months, including references in season two of the Netflix series “House of Cards.” With this renewed interest, I wanted to make sure that I was clear on the different terms associated with the Deep Web. My research prompted me to dig even deeper (pun intended).

The Surface Web

The surface web is the part of the web that is searched by engines such as Google, Yahoo, and Bing. It is estimated that this surface layer accounts for only 1–5 percent of the entire web, as illustrated in a recently posted infographic from CNN. The surface layer excludes database search results and all corporate and academic sites behind a firewall. Search engines build and search from an index, so if a site is not part of the publicly searchable index, it is not included in this layer. A website can also intentionally make itself unsearchable by using a robots metatag that tells crawlers not to index it.

The Deep Web

The Deep Web is the layer that lies below the surface. Every time you query an online database, the site creates a new page. That new page, however, is not included in the surface index, because web crawlers cannot submit queries; a crawler can only build an index by visiting websites and following their links, as well as the links referencing those sites. Other examples of data in the Deep Web are academic journals that sit behind either a paywall or a firewall. All intranet data on corporate networks also resides in the Deep Web layer. Businesses such as BrightPlanet provide services to help you navigate the Deep Web.

The Dark Web

The top two layers can be considered to house legitimate data and transactions; they simply represent information that can be searched and indexed by web crawlers (surface) and information that cannot be seen by automated searches (deep). Within the Deep Web, however, is an isolated area called the dark web. This is the area where cyber tracks are erased and transactions for goods and services may or may not be legal or legitimate. You can access this part of the web through software such as the TOR browser, which can be downloaded and provides access to the TOR network. TOR is an acronym that stands for “The Onion Router”; if you think about an onion and its layers, TOR allows you to access the core of that onion. TOR operates by hiding originating addresses among a network of servers so the end user remains anonymous. This area may house legitimate anonymous transactions, but it is also the home of drug markets and other illicit trading.

Thoughts

I think it is important to understand the different terms relating to the different layers of the web and to understand the purpose of each layer. Could you benefit from a service that dissects the larger Deep Web for big data not available in the surface web? It is possible and very useful to be knowledgeable about all available options so you can provide the best IT service to your customers.

About Kelly Brown

Kelly Brown is an IT professional, adjunct faculty for the University of Oregon, and academic director of the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

Is The Network Really Neutral?

There has been a lot of noise lately about net neutrality in the United States, but I have been wondering: what about neutrality on the rest of our planet? We become focused on our own problems, or potential problems, and often forget that we are not the only players in this game. The Internet is not used or owned exclusively by the US; it belongs to the rest of the world as well, including China and Third World countries. How do they view net neutrality, or are we making much ado about nothing?

Definition

This is the best definition that I have found for net neutrality:

“Simply put, net neutrality is a network design paradigm that argues for broadband network providers to be completely detached from what information is sent over their networks. In essence, it argues that no bit of information should be prioritized over another. This principle implies that an information network such as the Internet is most efficient and useful to the public when it is less focused on a particular audience and instead attentive to multiple users.”

Just as the telegraph network of the 1800s and the telephone and electrical networks of the 1900s were and are neutral, the argument is raised that the Internet should follow suit.

What Is Different in 2014?

The term “net neutrality” was coined over ten years ago and is based on the early operating principle of the Internet: that the network would be open equally to all. In December 2010, the Federal Communications Commission (FCC) tried to codify that accepted policy by creating the Open Internet Order. The flaw was that the FCC was using the same playbook developed to regulate telephone companies, even though Internet providers are classified as “information services,” not “telecommunications services.” Verizon challenged the order in 2011, and this month the courts finally threw it out on the grounds that the FCC did not have jurisdiction to create it. Suddenly, the term net neutrality is back in vogue and back in tweets.

Is The Rest of the World Open?

I was curious whether the rest of the world enjoys an open Internet, a regulated Internet, or a tiered Internet. Tiered Internet is the doomsday scenario in which Internet service providers charge customers and content providers a premium for higher-bandwidth applications; this is the fear behind the absence of a regulated open Internet. In researching this question I came across a lot of theories and conjectures at both ends of the spectrum, but not a lot of straight answers. Just as the United States is trying to get a handle on how free the Internet should be, other countries are asking similar questions. The International Telecommunications Union (ITU), an arm of the United Nations, held a conference in December 2012 in Dubai, United Arab Emirates. At that conference there was an attempt to float an international telecommunications treaty, but unlike many smaller countries, the US, Canada, and the UK refused to sign. It was a failed attempt to give more regulatory power over the Internet to the United Nations through the ITU. The next conference will take place in October of this year in Busan, South Korea; it is assumed that a similar vote will come up again.

My Thoughts

It is not only the United States that is struggling with how, or whether, to regulate the Internet; the same scene is playing out on the international stage. The European Union is talking about it, China is talking about it, and South American countries are talking about it as well. They are all struggling to understand how to protect themselves from corporate interests, or even from their neighbors, while ensuring that their citizens continue to enjoy unfettered access. My take is that Internet 3.0 will require a sizable investment in infrastructure, and if we want to continue to enjoy increasing access and options, we have to talk about where those funds will come from.

Do you have an opinion on the current net neutrality debate? Let me know.

About Kelly Brown

Kelly Brown is an IT professional, adjunct faculty for the University of Oregon, and academic director of the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.