Computer Vision and Deep Learning Apps Are Taking Center Stage — Quite Literally

When you’re a superstar like Taylor Swift, security can be a major concern. Now there are new tools that can help celebrities stay safe: AI combined with image detection. A recent article in The Verge reported that at a May 2018 Taylor Swift concert, her security team employed facial recognition to identify potential stalkers. Images from a facial recognition camera were cross-referenced with a database of hundreds of the pop star’s known stalkers to see if there was a match. When you think of deep learning and computer vision, this type of use case might not be the first thing that comes to mind, but it goes to show that the possibilities are endless.
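
How might that kind of cross-referencing work? Below is a minimal sketch using the open-source face_recognition Python library. The image file names are placeholders, the 0.6 tolerance is simply the library’s default, and this is purely illustrative – it is not the system described in the article.

```python
# Minimal sketch: compare faces found in a camera frame against a small
# database of known faces. Illustrative only; file names are placeholders.
import face_recognition

# Encode the faces we want to watch for (e.g., images of known individuals).
known_image = face_recognition.load_image_file("known_person.jpg")
known_encodings = face_recognition.face_encodings(known_image)  # 128-d vectors

# Encode every face detected in a frame captured by a camera.
frame = face_recognition.load_image_file("camera_frame.jpg")

for encoding in face_recognition.face_encodings(frame):
    # True/False per known face, based on distance between embeddings.
    matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
    if any(matches):
        print("Possible match found - flag for human review, never act on the match alone.")
```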

Computer vision captures, processes and analyzes real-world images and videos to provide meaningful information, and we’re just beginning to harness its real capabilities. According to research firm Tractica, the computer vision market is expected to soar from an estimated $1.1 billion in 2016 to $26.2 billion by 2025.

Deep learning is giving a major boost to the capabilities of computer vision and sparking renewed interest in the field. As a form of AI that enables algorithms to learn by example, deep learning uses learned data representations, as opposed to task-specific algorithms, to derive deeper and more independent insights than other forms of machine learning.

Consider the following examples of deep learning-enabled computer vision in action:

  • Ensuring safety in autonomous cars. It’s being applied in autonomous driving to navigate roads and make quick decisions in real time, such as identifying an oncoming vehicle or slowing down on icy pavement.
  • Designing better ads. Companies such as Gannett are turning to deep learning and computer vision to design better online ads, determining which colors, images and fonts work best. The company says that this has boosted click-through rates across different news sites.
  • Improving patient outcomes. It can help physicians diagnose diseases, among other applications. For example, a physician or radiologist can use it to review brain scans to determine healthy or not so healthy areas of the brain.
  • Improving urban planning. Computer vision and deep learning solutions can detect the number of buses and cars on busy highways and side roads to more effectively manage traffic.
  • Gauging emotions. In areas like education or retail, it can be used to determine the emotions of consumers or students and their reactions to the classroom or in-store experience. In education in particular, surveillance cameras can determine if classroom instruction is interesting by how engaged students appear to be.

Sending Computer Vision and Deep Learning to the Moon

When computer vision and deep learning solutions are deployed in satellites, the possibilities extend even further. Satellite imagery gives us an elevated look at massive amounts of images for applications such as:

  • Tackling deforestation. Computer vision and deep learning can help detect the number or specific species of trees in certain forests and parks to determine their growth or risk, and if deforestation is occurring, it can help to address the specific factors that could be causing it.
  • Tracking economic growth. By monitoring the numbers of cars, electric lights visible at night, or construction activities, we can track the development and economic growth of countries around the world.
  • Responding to world crises. In situations such as a refugee crisis or war, it can help provide valuable information that can be used to plan for the supply of life-sustaining resources like food and shelter materials.

How Do These Applications Become Reality? It’s Not that Simple.

While there are clear benefits to the use of deep learning-based computer vision solutions, the question of the hour is: how do companies get there? The democratization of this type of AI through pre-packaged apps such as Salesforce.com’s Einstein or Google AI touts the availability of “AI for everyone,” yet when everyone has access to the same benefits, no one can really stand apart.

The democratization of AI has made it easy to create plug-and-play models, but it is still hard to create good ones. The plethora of code and tutorials makes it possible for a basic programmer to build a basic model fairly easily, but there’s a huge gap between a basic AI solution that answers rudimentary questions and one with a deep understanding of a specific business. When it comes to deep learning, raw skill matters less than the experience to know which parameters to choose for each dataset and business problem.

While starting with ready-made deep learning solutions might be a good way to get your feet wet, companies serious about leveraging deep learning-enabled computer vision for competitive advantage eventually need to step away from the quick fix and develop custom applications from scratch. Custom solutions not only provide the deep-dive analysis that comes from understanding your unique business needs and specific customer behaviors, but they also use these data- and image-driven insights to solve your business challenges and drive your specific growth path. Ready-made solutions simply do not have the robustness and experience required to do the job.

The problem is that true deep learning-enabled computer vision requires a very specific and highly honed expertise that can be hard to find. Data scientists spend years understanding not only the tech involved, but specific industry customer behaviors and challenges. They learn to understand what resonates in certain markets and they know how to transfer their understanding to machines. In addition, even with out-of-the-box solutions, really beneficial AI requires constant care and feeding. Deep learning solutions need to be constantly fed new data, images, video and other content to be accurate and up-to-date. Very few organizations have the resources to accomplish this in-house. So, while it may be okay to dabble with some of the out-of-the-box AI solutions, custom-developed solutions built from scratch are unmatched in providing true business advantage.

Powerful deep learning insights derived from computer vision technologies are enabling a whole new level of awareness and understanding, improving lives, making people safer, cities more efficient and health diagnoses more accurate. These benefits require a deep commitment from the organizations that deploy them and a trust in what they can really accomplish. As Taylor Swift asks in the title of one of her top songs, “Are You Ready for It?”

Are You Prepared for AI in 2019? Key Trends Driving Its Growth

AI is continuing to make great strides. Industries as diverse as telecommunications, insurance, financial services and education are embracing new applications – and this growth is only expected to accelerate in 2019 despite challenges such as the shortage of data scientists. AI will not only continue to make inroads across sectors but it will become a pervasive technology because it enables companies to make better decisions and handle processes more effectively and efficiently. The companies that adopt the technology sooner rather than later will be in the best position to leverage AI for competitive advantage in the coming year and beyond.

With all the benefits that AI offers, it’s no wonder that it’s taking off. From machine learning, which automates routine tasks, to chatbots, which simulate human communication, to deep learning, which increasingly resembles human thinking, and predictive analytics, which uses historical data to predict future outcomes – the benefits of AI are very attractive.

AI is already having a major impact on the way business is being conducted today. One implication is that BI, as we know it, will soon disappear. The term Business Intelligence, which has been around since 1958 (long before it could be traced as a search term), will no longer be a viable approach. In 2019, the term will instead morph into Business Insights, marked by a reduced focus on dashboards and reports and an increased focus on outcome-driven, value-based analytics. This will be driven by the emphasis on data, as well as the ability of AI-enabled apps to capture as well as predict outcomes based on this data.

There are two main thrusts in AI today: pragmatic AI, which has immediate applications, and pure or open AI, which is focused on simulating humans. In the coming year and for the foreseeable future, the focus will be on pragmatic AI. Companies are using these smart apps to solve actual business and real-world problems, from automatically detecting parasites in blood smears and helping security personnel at airports avoid false alarms, to enabling financial services firms to provide personalized customer advice through chatbots that increase loyalty and satisfaction.

Challenges to growth

Even in a year of growth, AI adoption will face several challenges in the year ahead:

  • Data is needed to drive AI adoption. In order to automate processes, predict outcomes or communicate with humans, AI requires a huge quantity of data. The more data you can provide, the better – and the more accurate the outcome will be. Also, the data needs to be in good shape for the predictions and results to be valid. With data trapped in different silos, many organizations don’t know what data they have, where it is, or how to clean it up. Getting their data houses in order will be a first step to AI adoption for many companies in the new year.
  • The shortage of data scientists will not abate. According to LinkedIn’s Workforce Report for August 2018, there is a shortage of 151,717 people with data science skills, and IBM estimates that by 2020, the number of jobs for U.S. data professionals will increase to around 2.7 million. While there are just not enough qualified data scientists to go around, one thing is certain: the solution does not lie in using citizen app developers to create algorithms with pre-defined, as-a-service apps. Creating algorithms, teaching AI programs – as well as providing the constant refinement and training that is needed – is a complex, specialized process that requires the skills of qualified data scientists.
  • Heavy-duty computing power is required. The number of simultaneous calculations needed to create algorithms, as well as the cost and complexity of the advanced GPU servers needed to process them, makes it difficult for most companies to develop AI programs in-house. However, companies will increasingly turn to service providers to provide the specialized servers and skills needed to jumpstart their AI initiatives.

Growth areas ahead

Despite the road bumps that these challenges present, AI will be moving full speed ahead. Here are two new areas of growth we anticipate in 2019:

  • Data will move to the edge. The challenge of having good, sound data to fuel AI apps will give rise to a new approach to how data is captured, stored, curated and delivered. The focus will be on ensuring data is clean at the edge – where it enters – not in the back-end system. To turn edge data into insight for real-time action, it must be processed close to its source to avoid the latency, bandwidth and cost issues of sending data to a cloud-based data center. There will be huge market opportunities for companies that build tools to help enter and validate data at the edge.
  • 2019 will be the year of computer vision. Another area of growth will be in virtual reality (VR), augmented reality (AR), image detection and facial recognition. The use of AI to find patterns and insights from images and video will become as popular as data analytics. AI will involve not only image detection, but also movement and activity, enabling organizations to predict certain behaviors – for example, whether a fight is about to erupt in a crowd.


AI has established strong footholds across industries and applications and proven its value in improving the customer experience, offloading repetitive manual processes, and predicting future outcomes, among other activities. In 2019, organizations will continue to make progress in overcoming challenges and embrace AI at a faster clip as they reach for the brass ring of business insight and agility.

One of The Best Entrepreneurial Companies in America – Yep, That’s Wovenware

Entrepreneur Magazine just published its Entrepreneur360™ ranking of the best entrepreneurial companies in America. We’re quite honored and pleased to rank number 88 out of the 360 companies included – that’s the top quarter of the list.

Entrepreneur magazine started the E360 awards to identify 360 small businesses each year that are mastering the art and science of growing a business. We, along with other firms, were evaluated based on five metrics: impact, innovation, growth, leadership and business valuation.

According to Jason Feifer, editor in chief of Entrepreneur magazine, “These companies are deemed successful not only by revenue numbers, but by how well-rounded they are. The companies that make the list have pushed boundaries with their innovative ideas, fostered strong company cultures, impacted their communities for the better, and increased their brand awareness.”

It’s thrilling to be included on the A list and ranked according to an unbiased, scientific process from experts in the field, and we’ve worked hard to earn this recognition.

We’re very proud of our accomplishments over the past year, bringing advanced AI solutions to an ever-widening base of customers and helping them to solve real-world business challenges. We’ve expanded the company, bringing on new staff and new business partners and branching into new areas of innovation. But maybe what we’re most proud of is how we’ve built a resilient company that remains committed to giving back to our local community, as well as the larger U.S. technology community. We’ve worked hard to help rebuild parts of Puerto Rico still ravaged by the hurricanes, and we continue to foster the next generation of software engineers and data scientists, sharing our expertise and excitement about all that AI has to offer.

We look forward to continuing to reach new standards of success in 2019. I have to say, this inclusion among the top entrepreneurs helps give us a proud send-off to 2018, and provides icing on the cake for a year well-served.

Will Privacy Regulations Stifle AI Innovation?

AI relies on quality data – and the more of it, the better. But that can be easier said than done now that recent privacy regulations like General Data Protection Regulation (GDPR) are requiring organizations to mask personal data. GDPR is a European regulation developed to protect privacy and make data collection and management more transparent and secure.

While GDPR is a European regulation, the U.S. may well be following suit. Already, several U.S. states have recently introduced legislation to expand data privacy rules and more states are expected to join over the coming months.

During a European Union privacy conference, Apple’s Tim Cook recently issued a call to action for U.S.-wide data-protection regulation, saying individuals’ personal information has been “weaponized.” According to the same article, U.S. Senator Mark Warner said he was encouraged by Microsoft, Apple and others’ support of regulation. He said, “Too often we’ve heard companies whose business models depend on intrusive and opaque collection of user data claim that any changes to the status quo will radically undermine American innovation, but Apple and others demonstrate that innovation doesn’t have to be a race to the bottom when it comes to data protection and user rights.”

But how will GDPR really impact AI? What happens, for example, when specific information needs to be collected on individuals to predict customer behavior, such as who might be likely to purchase a product or upgrade technology? Besides allowing people to request that companies remove their data, GDPR also requires companies to anonymize their data unless identifying information is essential to its purpose.

This is especially true when it comes to AI in healthcare. According to an article, the GDPR introduced a right to explanation, which means that the logic of an automated decision can be challenged and the results tested – so businesses will need to think carefully before building an AI solution that cannot explain itself. Where required by GDPR, privacy impact assessments will be needed and privacy will take on even more urgency.

To conform to data privacy requirements, professionals in all industries working with big data need to remove identifying details before processing the information. Similarly, businesses should train their workers – or verify their knowledge – in handling big data to avoid ethical violations and significant fines.
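
As a simple illustration of what “removing identifying details before processing” can look like, here is a minimal pseudonymization sketch in Python. The column names, sample records and salt are hypothetical; a real pipeline would also consider quasi-identifiers (zip codes, birth dates, etc.) and secure key management.

```python
# Minimal sketch: pseudonymize direct identifiers before analysis.
import hashlib
import pandas as pd

SALT = "replace-with-a-secret-salt"  # keep the real salt out of source control

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

# Hypothetical customer records containing direct identifiers.
df = pd.DataFrame({
    "name": ["Ana Rivera", "Luis Ortiz"],
    "email": ["ana@example.com", "luis@example.com"],
    "purchase_amount": [120.50, 89.99],
})

# Keep a linkable but non-identifying key, then drop the raw identifiers
# before the data is handed off for analytics or model training.
df["customer_key"] = df["email"].map(pseudonymize)
df = df.drop(columns=["name", "email"])
print(df)
```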

Maybe it’s not all bad news for AI innovation

There is a lot of speculation out there about what privacy regulations will mean for AI, but ironically, the regulations that some feel are stifling innovation may actually be forcing businesses to get their data houses in order. Consider that one of the main barriers to AI is data collection, and many companies don’t know what type of data they have, let alone where it is or what shape it is in. It’s common for different departments or business units to have their own data silos, leaving everyone to guess about what information is available in their organization.

Now, GDPR and other regulations are forcing companies to take a long hard look at their data and get their houses in order. They have to think critically about their data, what type they are storing, and what rules they are implementing to protect it. They need to find out where the data is, what it contains, how it is being used and ensure that the quality is good. Accomplishing all of these things and unlocking the data that has been stuck inside an organization is critical for any AI effort.

There’s always another way

The market is finding a way to unlock all of this hidden data – and to do so securely. Technologies like synthetic data generation let companies derive value from personal information without identifying any individuals. Apple uses differential privacy to gather information on a group of users without using individual information, and Google offers a data loss prevention technology that strips personal information from databases.
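
To make the differential privacy idea concrete, here is a minimal sketch of the core mechanism: adding calibrated Laplace noise to an aggregate query so that no single individual’s presence can be confidently inferred from the result. The epsilon value and the data are illustrative assumptions; real deployments such as Apple’s are far more sophisticated.

```python
# Minimal sketch of a differentially private count using Laplace noise.
import numpy as np

rng = np.random.default_rng()

def dp_count(records, epsilon=0.5, sensitivity=1.0):
    """Return a noisy count; smaller epsilon means stronger privacy and more noise."""
    true_count = len(records)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical set of opted-in users.
opted_in_users = ["u1", "u2", "u3", "u4", "u5"]
print(f"Noisy count of opted-in users: {dp_count(opted_in_users):.1f}")
```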

Vendors are continuing to provide incentives for individuals to share their data. While that’s nothing new – it’s been around since loyalty cards were introduced – they are finding new and more attractive ways to encourage customers to opt in with their information.

Retail and e-commerce companies, in particular, will have a hard time adjusting to new privacy regulations because they have never had to do this before. They can learn from industries like healthcare and financial services, which have grappled with these data privacy issues for a while and have figured out how to manage data privacy effectively while still mining the data they need. For example, healthcare organizations know how to share information in files while masking personal data.

How personal do you want to get?

When it comes down to it, how often do you really need to know about a specific customer? To determine what products to offer and the order in which to display website content, for example, companies need to know about patterns of behavior, and in some cases where individuals are located, but very specific and personalized data may not necessarily be needed. After all, data is needed to train algorithms, not to expose specific individuals. While AI innovation is being fueled to new heights thanks to solid, quality data, individual privacy need not be sacrificed in the process.

Why It Pays to Outsource Software Development Close to Home

While the boundaries to global commerce are clearly eroding and companies no longer operate only within the confines of their own countries, the fact remains that all is not as seamless as it may seem.

Changing global regulations about data sharing, complex AI-based technologies, a shortage of skilled data scientists and the need for greater collaboration between companies and their service providers are driving a renewed focus on U.S.-based nearshoring as the optimum way to reach business goals.

Companies are turning to U.S.-based nearshore providers because, despite the growth in traditional offshoring, there have been bumps in the road. In some areas where labor is cheap, the workforce may be less educated and skilled and may rely on outdated frameworks and technologies, resulting in lower quality standards. And with the rapid pace of business, distance can be a problem.

So how exactly are new market factors driving a resurgence in U.S.-based nearshoring?

Data privacy and other regulatory controls.

The growth of data-driven businesses and the ability to collect data on virtually anyone have resulted in stricter data privacy regulations everywhere, and the U.S. is no exception. For example, the U.S. telecommunications industry has rules governing the ability to share data overseas, and that includes sending data to software development providers. Since most of the AI applications created today, especially machine learning and analytics tools, require tons of training data, it can be a major problem when the data can’t be sent to the provider without massive cleansing, obfuscation and encryption. Nearshoring to U.S. providers can eliminate this problem, enabling seamless transfer of the vital data that fuels AI-based applications.

Complex AI-based technologies.

Given the rapid pace of technology advances, companies are under pressure to continually enhance software to improve business value, provide market differentiation and competitive advantage. Yet, emerging solutions that incorporate machine learning, chatbots and other smart apps are complicated to build and maintain, requiring the specialized expertise of software engineers and data scientists – who must remain involved for long-term training of AI algorithms. Because of the expertise that’s required for these cognitive solutions, as well as the need to truly understand specific business needs, working with nearshore providers that can relate to the local challenges and help companies derive valuable insights from smart solutions is becoming more crucial than ever.

A shortage of data scientists.

LinkedIn calculates that, in August 2018, employers were seeking 151,717 more data scientists than exist in the U.S. And, according to an article on The Quant Crunch report, demand is expected to rise 28% by 2020. These statistics indicate a major talent shortage when it comes to filling data science positions. Yet even without a talent shortage, many companies don’t find it cost-effective to hire internal data scientists to constantly maintain and grow AI solutions. Both of these factors are creating new demand for nearshore providers that have both the skill-set and understanding of local business needs to drive continued evolution of AI apps.

The need for collaboration.

The development of complex AI-based apps often requires close communication between the customer and the outsourced team. A major reason software projects fail is a breakdown in communication, and it’s hard to have real-time communication with an outsourcing partner located several time zones away. Companies that nearshore from the U.S. typically experience fewer integration challenges, cultural differences and other risks than European and APAC companies that more frequently outsource to neighboring countries with significantly different languages, currencies and regulatory requirements.


Nearshoring is clearly on the rise when it comes to today’s business needs for advanced software development. But to help ensure that they have a positive engagement, companies should consider the following in their software development outsourcing strategy:

  • The education and skill of the workforce – with emphasis on quality commitments
  • Specialized expertise and resources – especially when it comes to machine learning, deep learning and other specialties
  • Language and cultural barriers
  • Alignment between your approaches
  • Proximity and the need for ongoing communication
  • Regulatory issues. For example, in the U.S., government mandates require that work for many aerospace and defense contractors and healthcare providers be conducted by U.S. citizens, which includes U.S. territories such as Puerto Rico and the U.S. Virgin Islands.

With today’s fast pace of technology innovation, driven by AI solutions that need to think like the people they are supporting, companies know they can’t do it alone and are realizing the benefits of outsourcing in its many forms. Yet, we can expect to see the nearshoring segment of the market take off at lightning speed as companies begin to more fully grasp the added value of having a software partner closer to home, abiding by the same regulations and business practices and ultimately becoming a real extension of the business, as well as contributor to its overall success.

Chatbots Get an A+ for Increasing Enrollment, Student Satisfaction in Higher Ed

Given declining enrollment in higher ed institutions nationwide, universities are under increasing pressure to attract students and keep them happy. Customer service has become more important than ever, but with limited budgets and staff, many universities have fallen behind and need to up their game.

Let’s face it, millennials, who grew up with the Internet and self-service applications, are used to getting information how and when they want it, 24/7. They want to be able to easily find answers to their questions on academic requirements, financial aid and other areas at the tap of a finger on their mobile devices.

Universities have tried to staff call centers as best they can to address these questions, often using students whose first priority, of course, is their academic work. It’s simply not feasible – not to mention exorbitantly expensive – to staff these call centers 24/7, so throwing more people at the problem isn’t the solution. And often, students don’t want to talk anyway. They want to use their mobile devices to get answers to their questions.

Fortunately, chatbots are an ideal way to address the problem. Chatbots, which simulate human communication either via voice or text, provide a direct user experience without any intermediaries. They can be programmed to answer all types of questions that students may have, providing immediate information to address their needs – through text and SMS messages, chat discussions on the website or by phone.
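
To show the basic routing idea in miniature, here is a hedged sketch of an FAQ bot: it matches a student’s question to the closest known question and answers it, or hands off to a human. The questions, answers and similarity threshold are hypothetical; production chatbots typically rely on trained intent classifiers rather than simple string matching.

```python
# Minimal FAQ chatbot sketch: answer known questions, hand off the rest.
import difflib

FAQ = {
    "when is the financial aid deadline": "The FAFSA priority deadline is posted on the financial aid page.",
    "how do i register for classes": "Course registration opens through the student portal each semester.",
    "what are the library hours": "The library is open 8 a.m. to 10 p.m. on weekdays.",
}

def answer(question: str, threshold: float = 0.6) -> str:
    # Find the closest known question above the similarity threshold.
    match = difflib.get_close_matches(question.lower(), FAQ.keys(), n=1, cutoff=threshold)
    if match:
        return FAQ[match[0]]
    # No confident match: route to a human staffer.
    return "Let me connect you with a staff member who can help."

print(answer("When is the deadline for financial aid?"))
print(answer("Can I bring my dog to the dorm?"))
```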

Making the best first impression for recruitment

Each year, colleges receive a flurry of inquiries from potential students about all aspects of college and academic life, financial aid and scholarships. To create the best first impression with students and a great user experience, it’s critical for schools to answer all of these questions quickly and accurately.

How the students are treated – and whether they feel well serviced or neglected – can make or break their overall feeling about a university and whether they want to apply or reject the school. A chatbot on a university website can be programmed to talk about what makes the school so unique and why it would be a great place to go, enabling students to click on a topic to learn more.

Once they decide to apply, students typically have a lot of questions. Chatbots can help guide them through the admissions process, which often involves an extensive application with lots of supporting materials. Similarly, they can get help with the many questions they typically have when applying for financial aid.

But even after students apply, are accepted and decide to attend, the process is still not over. Universities lose some of the incoming class to “summer melt,” when accepted students change their minds and don’t end up attending the school. Georgia State University decided to develop a chatbot that could reach out via text message to enrolled students with reminders and key information during the summer before they attend. It also answered questions about the dorms, financial aid, tuition and more. The school found that the chatbot reduced summer melt by over 21 percent compared to students in a control group.

Continuing a high-level user experience throughout college

Helping students during the application process isn’t enough. It’s important to maintain a high level of customer service throughout the student’s college career to ensure a positive experience at the school. Studies have shown that when students feel unsupported by the institution, it can be a factor in causing them to drop out. Chatbots can be used in all areas – answering questions about university services such as health services, athletics, clubs, student accounts and other aspects of student life.

To answer the growing number of student questions, the Inter-American University of Puerto Rico developed a chatbot that would communicate with students where they predominantly spent their time – on the website, via Facebook Messenger and by SMS. The university found that it was able to automate responses to 80 percent of the inquiries, while cutting in half the support staff needed for this task. With their time freed up from handling all the calls, the staff could focus on answering more complex questions and proactively helping students meet financial aid deadlines, select courses and get any academic help they might need.

Chatbots can also be used for technical support, answering some of the routine questions, so the support team can be more available to focus on the more difficult issues.

Enabling consistency and accuracy

Through years of experience fielding questions from prospective and incoming students, universities can easily develop a list of commonly asked questions and prepare answers that are clear, accurate and consistent. Chatbots can be programmed to answer these routine questions quickly, while routing more complex questions to human staffers. They also provide a high degree of quality control: since you program in all of the answers, you can ensure they are consistent and accurate, something that is harder to do with humans – or with websites, which require constant updating.

Every industry is interested in data today, and higher education is no exception. Since chatbots answer lots of student queries, they provide a wealth of documented data and insights into student behavior, student concerns, and areas where the university is strong or falls short. By keeping a record of each interaction, chatbots enable universities to use patterns in the data to improve their operations. For example, are many students asking questions that should be standard information on the website? Perhaps the website needs updating, with clearer information. Are many students voicing concerns over library closing times? Maybe the hours of operation need changing.
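
Spotting those patterns can be as simple as counting topics in the chat logs. The sketch below assumes each log entry has already been tagged with a hypothetical "topic" label; real logs would first need intent tagging or clustering.

```python
# Minimal sketch: count recurring question topics in chatbot logs.
from collections import Counter

# Hypothetical, already-labeled log entries.
chat_logs = [
    {"topic": "library_hours"}, {"topic": "financial_aid"},
    {"topic": "library_hours"}, {"topic": "course_registration"},
    {"topic": "library_hours"},
]

topic_counts = Counter(entry["topic"] for entry in chat_logs)
for topic, count in topic_counts.most_common(3):
    print(f"{topic}: {count} questions")
# A spike in "library_hours" questions might suggest the website's hours
# page is hard to find or out of date.
```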

Chatbots provide an easy, cost-effective solution to help universities overcome some of their most pressing challenges today, like declining enrollment and summer melt. It’s quickly becoming apparent that, with the help of chatbots, the schools that can best address student needs quickly, effectively and accurately will win students’ hearts and minds – and be well positioned to succeed in today’s competitive marketplace.