Jennifer M. Logg, Ph.D.
I am an Assistant Professor of Management at Georgetown University's McDonough School of Business.

My work examines why people fail to view themselves and their work realistically and when they are willing to use algorithms to improve their accuracy.

Theory of Machine
I call my primary line of research Theory of Machine.  In it, I use a psychological perspective to examine how people respond to the increasing prevalence of information produced by algorithms.  Broadly, this work examines how people expect algorithmic and human judgment to differ.  In a paper called "Algorithm Appreciation," I examine when people are most likely to leverage the power of algorithmic advice to improve the accuracy of their judgments about the world.  In a project on "Robo-Coaching," I test when people prefer performance feedback from an algorithm versus from a person.  More detail on this line of work appears below.

Overconfidence
My second line of research investigates overconfidence, a pervasive bias.  My work offers new insights into both the antecedents and consequences of these overly positive self-views.  I test the following questions: does valuing a particular skill increase people's overconfidence in that skill, does optimism help performance as much as people expect, is overconfidence socially "contagious," and does excessive confidence about winning an argument explain why everyone argues but no one is persuaded?


Theory of Machine

My research examines "big data" from a psychological perspective.  Research on what I call Theory of Machine is needed to keep up with the rapid pace of technological advancement that injects algorithms into many aspects of our lives.  By Theory of Machine (a new twist on the classic theory of mind), I refer to lay theories about how algorithmic judgment works.  My program of work examines people's perceptions of how algorithmic and human judgment differ in terms of their input, process, and output.

Algorithm Appreciation:
People prefer algorithmic to human judgment

Manuscript Published in OBHDP (Job Market Paper)
Even though computational algorithms often outperform human judgment, received wisdom suggests that lay people may be skeptical of relying on them (Dawes, 1979).  Counter to this notion, results from six experiments show that lay people adhere more to identical advice when they think it comes from an algorithm than from a person.  People showed this "algorithm appreciation" when making numeric estimates about a visual stimulus and forecasts about the popularity of songs and romantic matches.  Yet, researchers predicted the opposite response.  Algorithm appreciation persisted when advice appeared jointly or separately.  However, algorithm appreciation waned when people chose between an algorithm's estimate and their own (versus an external advisor's) and when they had expertise in forecasting.  Paradoxically, national security professionals, who make forecasts on a regular basis, relied less on algorithmic advice than lay people did, which hurt their accuracy.  These results shed light on the important question of when people rely on algorithmic advice over advice from people and have implications for the use of "big data" and the algorithmic advice it generates.

Oct. 26, 2018
Stop Naming Your Algorithms: Do People Trust Algorithms More Than Companies Realize?
– Harvard Business Review

Thank you to the following for their generous financial support:
  • Intelligence Advanced Research Projects Activity (IARPA) via the Department of Interior National Business Center (DoI/NBC), Contract No. D11PC20061
  • UC Berkeley Haas School of Business Dissertation Fellowship
  • Behavioral Lab at UC Berkeley's Haas School of Business
