Six Organizational Models for Data Science

Setting a team up for success or failure


Introduction

Data science teams can operate in myriad ways within a company. These organizational models influence not only the type of work the team does but also the team’s culture, goals, impact, and overall value to the company.

Adopting the wrong organizational model can limit impact, cause delays, and compromise a team’s morale. As a result, leadership should be aware of these different organizational models and explicitly select the model that aligns with each project’s goals and the team’s strengths.

This article explores six distinct models we’ve observed across numerous organizations. These models are primarily differentiated by who initiates the work, what output the data science team generates, and how the data science team is evaluated. We note common pitfalls, pros, and cons of each model to help you determine which might work best for your organization.

1. The scientist 

Prototypical scenario

A scientist at a university studies changing ocean temperatures and subsequently publishes peer-reviewed journal articles detailing their findings. They hope that policymakers will one day recognize the importance of changing ocean temperatures, read their papers, and take action based on their research.

Who initiates

Data scientists working within this model typically initiate their own projects, driven by their intellectual curiosity and desire to advance knowledge within a field.

How the DS team is judged

A scientist’s output is often assessed by how their work impacts the thinking of their peers. For instance, did their work draw other experts’ attention to an area of study, resolve fundamental open questions, enable subsequent discoveries, or lay the groundwork for later applications?

Common pitfalls to avoid

Basic scientific research pushes humanity’s knowledge forward, delivering foundational knowledge that enables long-term societal progress. However, data science projects that use this model risk focusing on questions that have large long-term implications but limited opportunities for near-term impact. Moreover, the model encourages a decoupling of scientists from decision-makers, and thus may not cultivate the shared context, communication styles, or relationships that are necessary to drive action (e.g., regrettably little action has resulted from all the research on climate change).

Pros

  • The opportunity to develop deep expertise at the forefront of a field
  • Potential for groundbreaking discoveries
  • Attracts strong talent that values autonomy

Cons

  • May struggle to drive outcomes based on findings
  • May lack alignment with organizational priorities
  • Many interesting questions don’t have large commercial implications

2. The business intelligence team

Prototypical scenario

A marketing team requests the open and click-through rates for each of their recent email campaigns. The business intelligence team responds with a spreadsheet or dashboard that displays the requested data.
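For illustration, here’s a minimal sketch of the kind of request a BI team services, using hypothetical campaign data and column names. Note that the click-through rate is computed per send here; some teams define it per open (click-to-open rate), so the definition should be confirmed with the requester.

```python
import pandas as pd

# Hypothetical per-campaign counts; in practice these would come from the
# email platform's event logs or a warehouse table.
campaigns = pd.DataFrame({
    "campaign_id": ["spring_sale", "newsletter_07", "winback_02"],
    "sends":  [10_000, 8_000, 5_000],
    "opens":  [2_400, 1_200, 900],
    "clicks": [480, 150, 270],
})

campaigns["open_rate"] = campaigns["opens"] / campaigns["sends"]
campaigns["click_through_rate"] = campaigns["clicks"] / campaigns["sends"]

print(campaigns[["campaign_id", "open_rate", "click_through_rate"]])
```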

Who initiates

An operational (Marketing, Sales, etc.) or product team submits a ticket or makes a request directly to a data science team member.

How the DS team is judged

The BI team’s contribution will be judged by how quickly and accurately they service inbound requests. 

Common pitfalls to avoid

BI teams can efficiently execute against well-specified inbound requests. Unfortunately, requests won’t typically include substantial context about the domain, the decisions being made, or the company’s larger goals. As a result, BI teams often struggle to drive innovation or strategically meaningful levels of impact. In the worst situations, the BI team’s work will be used to justify decisions that were already made.

Pros

  • Clear roles and responsibilities for the data science team
  • Rapid execution against specific requests
  • Direct fulfillment of stakeholder needs (Happy partners!)

Cons

  • Rarely capitalizes on the non-executional skills of data scientists
  • Unlikely to drive substantial innovation
  • Top talent will typically seek a broader and less executional scope

3. The analyst 

Prototypical scenario

A product team requests an analysis of a recent spike in customer churn. The data science team studies how churn spiked and what might have driven the change. The analyst presents their findings in a meeting, and the analysis is preserved in a slide deck that is shared with all attendees.
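As a rough illustration of the flavor of this work, the sketch below uses entirely hypothetical weekly data to locate the spike and check whether it coincides with a candidate driver (here, page load times):

```python
import pandas as pd

# Hypothetical weekly metrics; a real analysis would pull these from a warehouse.
df = pd.DataFrame({
    "week": pd.date_range("2025-01-06", periods=8, freq="W-MON"),
    "customers_at_start": [1000, 1010, 1025, 1030, 1040, 1020, 995, 980],
    "churned":            [20, 19, 22, 21, 55, 58, 54, 30],
    "p95_page_load_ms":   [850, 840, 860, 855, 2100, 2200, 2050, 1100],
})
df["churn_rate"] = df["churned"] / df["customers_at_start"]

# Flag spike weeks relative to the median (robust to the spike itself,
# unlike a mean-based threshold), then inspect the candidate driver.
spike_weeks = df[df["churn_rate"] > 1.5 * df["churn_rate"].median()]
print(spike_weeks[["week", "churn_rate", "p95_page_load_ms"]])
print("correlation:", df["churn_rate"].corr(df["p95_page_load_ms"]).round(2))
```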

Who initiates

Similar to the BI model, the Analyst model typically begins with an operational or product team’s request. 

How the DS team is judged

The Analyst’s work is typically judged by whether the requester feels they received useful insights. In the best cases, the analysis will point to an action that is subsequently taken and yields a desired outcome (e.g., an analysis indicates that the spike in client churn occurred just as page load times increased on the platform. Subsequent efforts to decrease page load times return churn to normal levels).

Common pitfalls to avoid

An analyst’s insights can guide critical strategic decisions while helping the data science team develop invaluable domain expertise and relationships. However, if an analyst doesn’t sufficiently understand the operational constraints of a domain, their analyses may not be directly actionable.

Pros

  • Analyses can provide substantive and impactful learnings 
  • Capitalizes on the data science team’s strengths in interpreting data
  • Creates opportunity to build deep subject matter expertise 

Cons

  • Insights may not always be directly actionable
  • May not have visibility into the impact of an analysis
  • Analysts at risk of becoming “Armchair Quarterbacks”

4. The recommender

Prototypical scenario

A product manager requests a system that ranks products on a website. The Recommender develops an algorithm and conducts A/B testing to measure its impact on sales, engagement, etc. The Recommender iteratively improves their algorithm via a series of A/B tests. 
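As an illustration of the evaluation step, here’s a minimal sketch of a standard two-proportion z-test on hypothetical A/B conversion counts; real teams will typically rely on their own experimentation tooling and methodology.

```python
import math

def two_proportion_ztest(conversions_a, n_a, conversions_b, n_b):
    """Two-sided pooled z-test for a difference in conversion rates."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical results: control ranking vs. the Recommender's new algorithm.
z, p = two_proportion_ztest(500, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ~ 1.89, p ~ 0.058: suggestive, not conclusive
```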

Who initiates

A product manager typically initiates this type of project, recognizing the need for a recommendation engine to improve the users’ experience or drive business metrics. 

How the DS team is judged

The Recommender is ideally judged by their impact on key performance indicators like sales efficiency or conversion rates. The precise form this takes will often depend on whether the recommendation engine is client-facing or back-office-facing (e.g., lead scores for a sales team).

Common pitfalls to avoid

Recommendation projects thrive when they are aligned to high-frequency decisions that each have low incremental value (e.g., what song to play next). Training and assessing recommendations may be challenging for low-frequency decisions because of low data volume. Even assessing whether adopting recommendations is warranted can be challenging if each decision has high incremental value. To illustrate, consider efforts to develop and deploy computer vision systems for medical diagnoses. Despite their objectively strong performance, adoption has been slow because cancer diagnoses are relatively low-frequency and have very high incremental value.
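A back-of-the-envelope power calculation makes the data-volume constraint concrete. Under textbook assumptions (two-sided α = 0.05, 80% power) and purely illustrative numbers, detecting even a modest lift requires tens of thousands of decisions per arm, which low-frequency decisions may never accumulate:

```python
import math

def samples_per_arm(p1, p2, alpha_z=1.96, power_z=0.84):
    """Approximate samples per arm to detect a lift from rate p1 to p2
    (two-sided alpha = 0.05, power = 0.80)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((alpha_z + power_z) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a 5.0% -> 5.5% conversion lift needs ~31,000 decisions per arm.
print(samples_per_arm(0.050, 0.055))
```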

Pros

  • Clear objectives and opportunity for measurable impact via A/B testing
  • Potential for significant ROI if the recommendation system is successful
  • Direct alignment with customer-facing outcomes and the organization’s goals

Cons

  • Errors will directly hurt client or financial outcomes
  • Internally facing recommendation engines may be hard to validate
  • Potential for algorithmic bias and negative externalities

5. The automator

Prototypical scenario

A self-driving car takes its owner to the airport. The owner sits in the driver’s seat, just in case they need to intervene, but they rarely do.

Who initiates

An operational, product, or data science team may identify an opportunity to automate a task.

How the DS team is judged

The Automator is evaluated on whether their system produces better or cheaper outcomes than when a human was executing the task.

Common pitfalls to avoid

Automation can deliver superhuman performance or remove substantial costs. However, automating a complex human task can be very challenging and expensive, particularly if the task is embedded in a complex social or legal system. Moreover, framing a project around automation encourages teams to mimic human processes, which may prove challenging because humans and algorithms have very different strengths and weaknesses.

Pros

  • May drive substantial improvements or cost savings
  • Consistent performance without the variability intrinsic to human decisions
  • Frees up human resources for higher-value, more strategic activities

Cons

  • Automating complex tasks can be resource-intensive, and thus yield low ROI
  • Ethical considerations around job displacement and accountability
  • Challenging to maintain and update as conditions evolve

6. The decision supporter

Prototypical scenario

An end user opens Google Maps and types in a destination. Google Maps presents multiple possible routes, each optimized for different criteria like travel time, avoiding highways, or using public transit. The user reviews these options and selects the one that best aligns with their preferences before they drive along their chosen route.

Who initiates

The data science team often recognizes an opportunity to assist decision-makers by distilling a large space of possible actions into a small set of high-quality options that each optimize for a different outcome (e.g., shortest route vs. fastest route).
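One common way to perform this distillation is to keep only the Pareto-optimal options, i.e., those not beaten on every criterion by some other option. Here’s a minimal sketch with hypothetical routes and criteria:

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float     # lower is better
    highway_km: float  # lower is better, for highway-averse users

def dominates(a: Route, b: Route) -> bool:
    """True if a is at least as good as b on both criteria, strictly better on one."""
    return (a.minutes <= b.minutes and a.highway_km <= b.highway_km
            and (a.minutes < b.minutes or a.highway_km < b.highway_km))

def pareto_front(routes: list[Route]) -> list[Route]:
    """Keep only routes that no other route dominates."""
    return [r for r in routes if not any(dominates(o, r) for o in routes)]

candidates = [
    Route("I-95 direct", 32, 28.0),
    Route("Parkway",     41,  6.0),
    Route("Back roads",  55,  0.0),
    Route("Mixed",       45,  9.0),  # dominated by "Parkway", so filtered out
]

for r in pareto_front(candidates):
    print(f"{r.name}: {r.minutes:.0f} min, {r.highway_km:.0f} km of highway")
```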

How the DS team is judged

The Decision Supporter is evaluated based on whether their system helps users select good options and then experience the promised outcomes (e.g., did the trip take the expected time, and did the user avoid highways as promised).

Common pitfalls to avoid

Decision support systems capitalize on the respective strengths of humans and algorithms, so their success depends on how well the two collaborate. If the human doesn’t want or trust the algorithm’s input, this kind of project is much less likely to drive impact.

Pros

  • Capitalizes on the strengths of machines to make accurate predictions at large scale, and the strengths of humans to make strategic trade-offs
  • Engagement of the data science team in the project’s inception and framing increases the likelihood that it will produce an innovative and strategically differentiating capability for the company
  • Provides transparency into the decision-making process

Cons

  • Requires significant effort to model and quantify various trade-offs
  • Users may struggle to understand or weigh the presented trade-offs
  • Complex to validate that predicted outcomes match actual results

A portfolio of projects

Under- or overutilizing particular models can prove detrimental to a team’s long-term success. For instance, we’ve observed teams avoid BI projects and then suffer from a lack of alignment about how goals are quantified. Likewise, teams that avoid Analyst projects may struggle because they lack critical domain expertise.

Even more frequently, we’ve observed teams overutilize a subset of models and become entrapped by them. This process is illustrated by a case study that we experienced:

A new data science team was created to partner with an existing operational team. The operational team was excited to become “data driven,” and so they submitted many requests for data and analysis. To keep their heads above water, the data science team overutilized the BI and Analyst models. This reinforced the operational team’s tacit belief that the data team existed to service their requests.

Eventually, the data science team became frustrated with their inability to drive innovation or directly quantify their impact. They fought to secure the time and space to build an innovative Decision Support system. But after it launched, the operational team rarely used it.

The data science team had trained their cross-functional partners to view them as a supporting org rather than joint owners of decisions. So their latest project felt like an “armchair quarterback”: it expressed strong opinions without sharing ownership of execution or outcomes.

Overreliance on the BI and Analyst models had entrapped the team. Launching the new Decision Support system proved a time-consuming and frustrating process for all parties. A top-down mandate was eventually required to drive enough adoption to assess the system. It worked!

In hindsight, adopting a broader portfolio of project types earlier could have prevented this situation. For instance, instead of culminating in an insight, some Analyst projects should have generated strong recommendations about particular actions. And the data science team should have partnered with the operational team to see this work all the way through execution to final assessment.

Conclusion

Data science leaders should intentionally adopt an organizational model for each project based on its goals, constraints, and the surrounding organizational dynamics. Moreover, they should be mindful to build self-reinforcing portfolios of different project types.

To select a model for a project, consider:

  1. The nature of the problems you’re solving: Are the motivating questions exploratory or well-defined? 
  2. Desired outcomes: Are you seeking incremental improvements or innovative breakthroughs? 
  3. Organizational hunger: How much support will the project receive from relevant operating teams?
  4. Your team’s skills and interests: How strong are your team’s communication vs. production coding skills?
  5. Available resources: Do you have the bandwidth to maintain and extend a system in perpetuity? 
  6. Readiness: Does your team have the expertise and relationships to make a particular type of project successful?
