Gen 3.0 Analytics – How the Government Can Use the Data it Owns

The government is sitting on a treasure trove of HR data that it does not typically use. For example, agencies have data about performance, and data about where they recruit and what kinds of questions they ask in job announcements. I do not know of a single agency that is comparing the questions they ask to the performance they get from the selectees. There are so many possibilities to use the data to produce actionable information that would help agencies do better hiring, get better performance, and use their resources more wisely.
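As a rough sketch of the kind of comparison described above (all IDs, scores, and ratings below are invented for illustration, not drawn from any agency's data), an agency could join assessment-question scores to selectees' later performance ratings and correlate them question by question:

```python
import pandas as pd

# Hypothetical data: each row is one selectee's score on one assessment
# question; ratings holds each selectee's later performance rating (1-5).
assessments = pd.DataFrame({
    "selectee_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "question":    ["Q1", "Q2"] * 4,
    "score":       [4, 2, 5, 3, 2, 5, 3, 4],
})
ratings = pd.DataFrame({
    "selectee_id": [1, 2, 3, 4],
    "performance": [3, 5, 2, 4],
})

# Join each question score to the selectee's eventual performance rating,
# then compute the score-performance correlation for every question.
merged = assessments.merge(ratings, on="selectee_id")
by_question = merged.groupby("question").apply(
    lambda g: g["score"].corr(g["performance"])
)
print(by_question)
```

Questions whose scores correlate strongly with later performance are candidates to keep; questions with weak or negative correlations may be adding noise to selections rather than predictive value.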

I usually write everything in my blog, but I believe analytics are crucial to any efforts to improve federal hiring, performance and virtually everything else. However, I am not an expert on analytics, so I did not feel qualified to write about specifics on how to do it. My colleague Michael (Whit) Whitaker is an expert on analytics, so I asked him to write a guest blog post on the use of analytics and what agencies should be looking for when they think about analytics expertise.

Jeff

Guest blog post by Michael Whitaker, PhD 

As mission leaders and human resources officers consider how best to meet their organizations' analytics needs, it may be worth reconsidering the relative priority of hiring elusive and expensive data scientists.  The hiring frenzy around the term “data scientist” is as strong as ever as organizations seek individuals who combine strong technology skills with business savvy.  However, some interesting trends in the evolution of analytics may lead organizations to reassess the pressing need to hire individual data scientists and instead focus either on cultivating domain experts who are analytics-savvy or on crafting teams that can effectively use modern analytics to address organizational challenges and drive results.  A Fortune magazine article acknowledges the hot market for data scientists but cautions that the combination of expanding data science training programs and analytics automation may soon begin to dampen demand.  Let’s look briefly at the evolution of analytics across three broad generations to illustrate the changing value of expertise, and what the government should be looking for today.

[Analytics chart]

Gen 1.0:  Top Experts Can Do It All

Gen 1.0 can be described as human-driven analytics.  These analytics are generally characterized by small data and batch processing.  The data and analytics inputs and outputs are intuitive for humans to understand and manipulate.  Many experts exist who combine deep domain expertise with the ability to perform the analytics using standard tools such as Excel, statistical software, or geospatial packages.  Gen 1.0 solutions have been delivering value to organizations for many decades, leveraging a combination of skills from in-house experts and analysts combined with off-the-shelf tools.  Example solutions in this generation include:

  • Data management, extraction, transformation, loading, governance, and visualization
  • Descriptive analyses and advanced statistics
  • Business rules, alerts, and decision support tools
  • Geospatial analytics and cartography
  • Simulation, optimization, and forecasting
  • Customer and market segmentation

Gen 2.0: As Data and Technology Rapidly Evolve, the Need for Data Scientists Emerges

The rise in the demand for data scientists has coincided with Gen 2.0, which can be described as data-driven analytics.  Gen 2.0 is defined by big data and machine learning, with rapid technology advancement enabled by the cloud.  Example Gen 2.0 solutions include:

  • Big data acquisition, processing, and storage including data lakes
  • Natural language processing and semantics
  • Supervised and unsupervised machine learning
  • Predictive analytics

The scope and scale of analytics in this generation have begun to challenge what is readily understandable by human intuition.  Most importantly, domain and technology skills have started to diverge.  It is no longer relatively easy for a single individual to be an expert in a domain, identify a problem, and deliver the analytics needed to solve it in a ubiquitous tool like Excel.  With the rapid evolution of analytics technology, it is becoming increasingly difficult for individuals to maintain deep and relevant expertise in both a specific domain and in the tools and techniques required to deliver modern analytics solutions.  Therefore, Gen 2.0 has placed an increasing emphasis on the need for data scientists with skills to manipulate data and develop custom algorithms in a quest to unlock hidden value.  This quest for data scientists can be viewed in part as a desire to recombine the diverging domain and analytics skillsets back into individuals, but this time with a greater emphasis on the analytics skills than on domain expertise.

Gen 3.0: The Coming of the Age of Machine-driven Analytics

While Gen 2.0 has proven disruptive with rapidly evolving technologies and the promise of using big data analytics to extract untapped value from the deluge of data, the transition to Gen 3.0 is already beginning in many sectors.  Gen 3.0 can be described as machine-driven analytics that involve machine-to-machine communication and advanced artificial intelligence.   Examples include:

  • Analytics of the internet of things
  • Real-time prescriptive analytics
  • Advanced artificial intelligence (e.g. Watson and Deep Learning)

One benefit of the era of Gen 3.0 analytics will be that analytics technologies and algorithms will be increasingly commoditized, requiring less custom development and bringing more widespread availability of advanced capabilities.  The use of advanced analytics will be more about connecting to and applying the right tool from a machine learning or artificial intelligence library, or selecting the proper algorithm for the specific problem, rather than developing and coding new analytics techniques from scratch.  Gen 3.0 should expand the ability of federal agencies and other organizations to use complex analytics and make far better use of the mountains of data the government collects and maintains.
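As a hedged sketch of that shift (using scikit-learn as a stand-in for a commoditized algorithm library, with synthetic data in place of real agency records), applying an off-the-shelf algorithm looks less like inventing new math and more like selecting and validating a ready-made component:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for an agency dataset: 500 records, 10 features,
# and a binary outcome to predict.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# The "commoditized" step: pick a proven algorithm off the shelf and
# validate it with cross-validation, rather than coding a learning
# algorithm from scratch.
model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```

The domain expert's job in this sketch is not writing the forest-building code; it is deciding whether a classifier is the right framing for the mission problem, whether the features are meaningful, and whether the validated accuracy is good enough to act on.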

Warning:  What Happens When the Algorithms are Wrong?

The challenge with Gen 3.0 is that while the analytics will become increasingly powerful and more widely available, they will also become harder to intuitively understand.  The technologies will take in user requests and deliver answers based on advanced artificial intelligence and machine learning, but the process of getting from the question to the answer will increasingly occur in a black box that obscures the stepwise logic from detailed inspection.  The risk in putting faith in the outputs of black box algorithms is the potential for unintended consequences when the algorithms are wrong.  In discussing the limits of artificial intelligence and machine learning technologies, Arati Prabhakar, Director of the Defense Advanced Research Projects Agency (DARPA), warns that when artificial intelligence and machine learning algorithms are wrong, they can be wrong in ways that no human would ever be.

As DARPA regularly investigates technology that is decades ahead of what is broadly available in private or public applications, the warning should give all organizations substantial pause in their quest to hire teams of data scientists and unleash them on their datasets to extract value.  As we enter the era of Gen 3.0 analytics, there will be increasing requirements for domain experts who are fluent in the capabilities, limitations, and appropriate application of emerging technologies.  The experts will be asked to select the right analytics approach for the specific mission challenge, help train the analytics algorithms to be properly applied for that challenge, and validate that the analytics outputs are reasonable.  They will also need to assess other, non-data constraints faced in recommending actions to achieve desired outcomes including politics, physical infrastructure, social systems, and cultural norms.  While technical experts can help integrate and deploy selected solutions, it will be imperative that domain experts are sufficiently fluent in the capabilities and limitations of emerging solutions to know what approach is most appropriate and understand the risks of algorithms making the wrong decisions.

The Pendulum Swings Again:  Adjust Your Talent Priorities Accordingly

[Chart: Value of Expertise as Analytics Mature]

Let’s consider a spectrum of skills with very strong technology and analytics expertise on the left and deep domain expertise on the right.  In Gen 1.0 analytics, the greatest value was achieved from those employees who were strong domain experts and had sufficient technology skills to deliver relevant analytics themselves.  However, those analytics skills were relatively straightforward for domain experts to acquire.  The tools were simpler and their capabilities were rudimentary.  As evidenced with Gen 2.0 analytics and the rush to hire data scientists, the pendulum swung sharply to the left as technical skills to wrangle and exploit rapidly evolving data and technology capabilities were at a premium, and far more difficult for domain experts to acquire.  As we head into Gen 3.0, expect the pendulum to swing back towards the center as data and analytics capabilities become commoditized and domain expertise is increasingly required to correctly identify and apply readily available solutions.  We are early enough in the development of Gen 3.0 analytics that the government can adjust its hiring and contracting priorities to develop individuals and teams with deep domain expertise and emerging technology fluency, and use their skills to effectively navigate and extract value from the rapidly evolving Gen 3.0 analytics landscape.

 

Michael Whitaker, PhD

Michael “Whit” Whitaker is Vice President of Emerging Solutions for ICF International. He was co-founder of Symbiotic Engineering, and specializes in use of advanced data analytics to deliver actionable intelligence. Dr. Whitaker has a Ph.D. in Civil Engineering from the University of Colorado–Denver and an M.S. and a B.S. in Civil and Environmental Engineering from Stanford University.

Leader Development is Not a Luxury

Federal News Radio’s Jason Miller had a story on April 2 with the headline “Better trained supervisors key to improving morale.” Jason reported on WFED’s CHCO survey and an interview Francis Rose conducted with NASA CHCO Jeri Buchholz. The CHCO survey and Jeri stressed the need for leader development as a means of improving employee morale. I believe Jeri and my former CHCO colleagues are spot on. Absent significant investment in developing the leadership abilities of supervisors, the Federal government is going to have morale and performance issues for years to come.

I have heard comments from folks who say the emphasis on leader development and the role of leaders in driving Federal Employee Viewpoint Survey (FEVS) results is an indictment of supervisors. Nothing could be further from the truth. If it is an indictment of anything, it is the culture that says investing in supervisor training is a waste of time and money. That culture has resulted in budget cuts for training programs, a lack of emphasis on developing the so-called “soft skills” of leadership, and a belief that mission-related training is always more valuable than leader development.  Such beliefs harm agencies terribly. Here is why.

Supervisors drive culture and morale. Other than demographic questions, the FEVS has 84 questions. Of those, 65 are under the control of supervisors and managers. Here are the 2013 FEVS Questions with the 65 highlighted. So why not blame the supervisors? Easy – it is generally not their fault.

For the most part, people are selected for supervisory jobs based upon their technical skills. If we are filling a basket weaver supervisor position, we generally look at the basket weavers and pick the one the selecting official believes is the best basket weaver. In many cases there is little real consideration, and certainly no structured assessment, of that person’s leadership abilities. Once they are selected, we put them into a job that requires a completely different skill set from basket weaving and give them little, if any, real training to develop that new skill set. Many agencies send supervisors to a class that is called supervisory training, but it is really just training on the basics of writing job descriptions, using the rating system, and other basic HR-related skills. The “soft skills” are notably absent from many of these programs. So – we select people who are very good at what they do, but not at what we are selecting them for, do little to develop them, and then blame them for our problems. That is grossly unfair to the supervisors and the people they supervise.

It isn’t that there is no interest by supervisors in real training. At the Defense Logistics Agency, we implemented a comprehensive program for newly selected supervisors. It was so successful, we started getting complaints from people who had been in supervisory jobs prior to the program’s start asking why they could not have the same training. It was clear these folks wanted to do a good job. They wanted the training. Our response was to create a “retrofit” program to give them similar training.

If there is clearly a demand and a need, why does real leader development not happen? For many agencies, it is because leader development is not treated as a budget priority. With shrinking budgets and everyone competing for a diminishing pot of dollars, tradeoffs have to be made. Training has not traditionally been viewed as a priority, and leader development has drawn the short straw when the limited training dollars are allocated. There is often a mistaken belief that it would appear selfish for agency leadership to devote dollars to training supervisors when their employees are not getting the training they need. I believe that view, while based on the best of intentions, actually harms the very employees it is trying to protect. If employee views are so dramatically shaped by the quality of supervisors, as the FEVS shows, investing in leaders is investing in employees.

In addition to the benefits for employees, there is also a benefit to an agency’s customers. At DLA, we conducted both employee and customer surveys. We found a very strong correlation between our employees’ views and how our customers rated the quality of support they got from DLA. On some questions, such as “I have the information I need to do my job,” the correlation coefficient was +.90 or better. It was clear that how we treated our employees was directly related to how our customers perceived the service they got from DLA.
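A correlation of that kind takes only a few lines to compute. The numbers below are made up for illustration and are not DLA's actual survey results; they simply show the mechanics of pairing employee and customer measures by site:

```python
import numpy as np

# Illustrative, invented numbers: average employee agreement (%) with
# "I have the information I need to do my job" across eight field sites,
# paired with each site's customer-rated support quality (1-5 scale).
employee_agree = np.array([62, 70, 55, 81, 74, 58, 66, 79])
customer_score = np.array([3.1, 3.6, 2.8, 4.4, 3.9, 2.9, 3.3, 4.2])

# Pearson correlation: a value near +1 means the sites where employees
# feel well-informed are also the sites customers rate highest.
r = np.corrcoef(employee_agree, customer_score)[0, 1]
print("employee-customer correlation:", round(r, 2))
```

Any agency that runs both an employee survey (such as the FEVS) and a customer survey already has the two inputs this calculation needs; the only prerequisite is being able to match responses by organizational unit.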

With supervisory skills being so directly related to the FEVS results, and employee perceptions being so directly related to customer outcomes, it is clear that developing leaders is not a luxury. It is not a selfish use of precious resources for supervisors and managers’ own benefit. It can and will drive agency results and make the government a better employer. That makes it a necessity.