
Priyaanka Arora
July 23, 2025
The Relationship Between Data Science and Decision Making
What is the point of data science if not to power data-driven decisions?
Data science teams are often pulled into big decisions. The thinking is that data makes those choices more trustworthy and repeatable. But even perfect data might not be enough to make a call on something as fluid as a business decision.
A 2023 Oracle study found that 72% of leaders have avoided making a decision because they didn’t trust the data. 86% said too much data made things harder, not easier. So data isn’t the bottleneck. Clarity is.
The real strength of data science might be in building that clarity. Not in deciding what to do, but in laying solid foundations: pattern detection, uncertainty modeling, reproducible analysis, and data that’s easy for others to interpret and use.
Data scientists are usually closest to the data and furthest from the decision. That distance matters. So does the fact that most decisions factor in things that aren’t in the data at all.
Maybe the value of data science isn’t in the decision, but in how well the data was set up to support one.
What actually feeds a decision?
There’s more data available than ever. Structured data, transaction logs, clickstreams, freeform text, audio transcripts, customer support chats, sensor readings. Most companies aren’t short on inputs. What’s missing is shared understanding.
Getting to that understanding isn’t as simple as publishing a dashboard. Metrics need consistent definitions. Some gaps can’t be filled, like missing competitive intelligence or soft signals from leadership that never made it into a CRM. Different tools give different slices of the truth, and decisions get made in between those slices.
Even when the data is clean, context shapes interpretation. A 3% lift in a metric might look promising to one team and underwhelming to another, depending on what they’re optimizing for. A drop in engagement might be a sign of quality filtering, or just bad timing. Data doesn’t resolve that tension on its own.
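One thing data science can do is attach uncertainty to a number like that 3% lift, so teams at least argue about the same interval. Here is a minimal sketch using a bootstrap confidence interval; the group sizes, conversion rates, and data are all simulated for illustration, not taken from any real experiment:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated per-user conversion outcomes for two groups (illustrative only).
control = rng.binomial(1, 0.20, size=2_000)     # baseline ~20% conversion
treatment = rng.binomial(1, 0.206, size=2_000)  # ~3% relative lift

def bootstrap_lift_ci(control, treatment, n_boot=5_000, alpha=0.05, seed=0):
    """Bootstrap a confidence interval for the relative lift in means."""
    rng = np.random.default_rng(seed)
    lifts = np.empty(n_boot)
    for i in range(n_boot):
        # Resample each group with replacement and recompute the lift.
        c = rng.choice(control, size=control.size, replace=True)
        t = rng.choice(treatment, size=treatment.size, replace=True)
        lifts[i] = t.mean() / c.mean() - 1.0
    lo, hi = np.quantile(lifts, [alpha / 2, 1 - alpha / 2])
    return lo, hi

lo, hi = bootstrap_lift_ci(control, treatment)
print(f"95% CI for relative lift: [{lo:+.1%}, {hi:+.1%}]")
```

A wide interval that straddles zero tells both teams they may be debating noise; a tight interval narrows the debate to what the lift is worth, which is exactly the tension the data alone can’t resolve.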
A model can’t know if the board already has expectations for next quarter. Or if the product team promised a feature at a customer event. Those circumstances weigh heavily; data might inform the debate, but it can’t supply the full picture needed to close it.
Should data scientists offer recommendations along with analyses?
It’s tempting to say yes. After all, a recommendation gives the work closure. It signals confidence. It helps stakeholders move forward. But I think recommendations can muddy the art of the decision and suppress intuition. Trying to package recommendations for decisions puts unfair pressure on the data scientist to own something they can’t fully see. This is something I struggled with as a junior data analyst in a former life.
I’m starting to think the idea of a "neutral" recommendation is mostly fiction. To recommend is to imply a value judgment, and value judgments depend on tradeoffs, constraints, and goals that are often beyond the analyst’s line of sight. Even when intentions are good, the optics can create tension, especially if the recommendation cuts against leadership’s stated direction or reveals a misalignment.
In some orgs, data scientists try to soften the risk. They preview results, calibrate their framing, or co-author conclusions with PMs or leads. This can work. But it moves the job further from its core: show what’s there, explain what it means, and stop before assuming what should happen next.
The case for owning decisions
There’s a counterargument worth considering: in some environments, data scientists might be the best-positioned people to make a recommendation or decision. This is especially true when the data work is highly technical, the turnaround time is short, or the downstream impact is quantifiable. For example, in experimentation platforms or ML-driven systems, waiting on a cross-functional decision loop can delay outcomes without improving them.
In these cases, deferring the decision may not make it more informed; it just makes it slower. If a data scientist has clear metrics, clean data, and a decision structure in place, stepping up to recommend or act can create efficiency. It also signals confidence and accountability, especially in organizations trying to move faster or decentralize authority.
The key is to know the boundaries. Making decisions where the impact is measurable and well-scoped is different from trying to drive strategy or navigate politics. When data scientists own the outcomes in their domain, they can tighten the feedback loop and reduce ambiguity for others.
Clarity over conclusion
There’s a better way to contribute. Step back from the impulse to resolve ambiguity. Instead, give others the clearest possible picture of what the data shows, what it doesn’t, and where the limits are. Focus on:
- What changed in the data?
- What might explain it?
- How confident are you in the signal?
- What’s missing or unclear?
- Where are the blind spots or dependencies?
These are the questions that let others connect the dots. Your role is to build a clear lens, not to dictate the view. That clarity builds trust and it helps others make better calls without asking you to do their job.
This way, a business can leverage the piece of the process that data science is uniquely positioned to lead, without collapsing everything into a takeaway. Some questions don’t have a clean answer. Some findings point in multiple directions. And that’s fine as long as data science brings clarity.
The role of interactive tools
Data apps are useful here. Unlike static decks, an interactive app lets people explore a question from multiple angles. They can adjust filters, swap baselines, or dig into an outlier. They can test different business scenarios and see how conclusions hold up. Dash apps do this well. And with new advancements like Plotly Studio, it’s easier than ever to generate an entire interactive data app with AI agents.
The best part about apps is how they create space for shared exploration across levels of expertise and job functions. They let decision-makers and domain experts move through uncertainty instead of being handed a fixed view. This keeps the data team involved, without putting them on the hook for the outcome.
When built right, a data app makes decisions easier to reach and prompts action.
The storytelling dilemma
“Data storytelling” gets a lot of attention. Done well, it makes insights stick. But there’s a risk in making every dataset a story. Real insight isn’t always narrative-friendly. It’s jagged, partial, sometimes inconclusive. A story forces shape onto something that may still be taking form.
Some of the best data scientists are careful not to oversimplify. They don’t trim nuance to match the tone of a meeting, and they avoid shaping conclusions to fit expectations. They stay close to what the data actually supports, even when it’s messy or uncertain. That kind of honesty might not always land immediately, but over time, it earns respect and credibility.
This tension has been around for a while. There’s often more visibility and praise for those who can turn data into a tidy story than for those who stay grounded in its complexity. But strong analysis doesn’t always lend itself to clean arcs. Some of the most thoughtful and technically sound work comes from people who resist the urge to wrap everything up in a narrative, because they know the story might flatten what matters.
That kind of rigor depends on solid fundamentals. Data fluency. Clean joins. Transparent logic. Well-documented assumptions. It’s not flashy. But it holds up under pressure and makes it easier to show how things change under different assumptions.
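Showing how things change under different assumptions can be as simple as replaying a projection across a range of inputs instead of reporting one point estimate. The churn framing and every number below are hypothetical, purely to illustrate the pattern:

```python
def projected_customers(start: int, monthly_churn: float, months: int = 12) -> int:
    """Customers remaining after `months`, assuming a constant monthly churn rate."""
    return round(start * (1 - monthly_churn) ** months)

START = 10_000  # assumed starting customer count (made up)

# Stress-test the documented assumption rather than baking one value in.
for churn in (0.02, 0.03, 0.05):
    n = projected_customers(START, churn)
    print(f"churn {churn:.0%}/mo -> ~{n:,} customers in a year")
```

Presenting the spread (rather than a single forecast) keeps the assumption visible and lets the decision-maker weigh which scenario they believe.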
Plotly tools are built for this kind of work. If you’re looking to sharpen your practical skills, we built a DataCamp course with that in mind.
Learn Plotly and Dash on DataCamp
In this self-paced course, you’ll:
- Build charts with Plotly (box plots, histograms, and more)
- Create a full Dash app, no web experience needed
- Use Dash AG Grid to build connected tables
- Combine inputs and outputs in one app
- Build interfaces for testing hypotheses, not just reporting metrics
The course includes hands-on examples that are useful both for beginners and for those looking to build more advanced tools.
Data or decisions?
So the controversial take of the day is: data scientists shouldn’t try to make, or influence, the final call. They should take responsibility for the quality, clarity, and interpretation of the data. When that piece is handled well, the decision-makers around them have what they need to move forward and know who to return to when things shift.
I’ve found that discipline and a fundamental understanding of data matter. It’s easy to reach for a bold recommendation to sound decisive, but what really builds long-term trust is taking the time to design tools, analyses, and outputs that help others understand the tradeoffs for themselves.