Key takeaways
- AI can accelerate research and improve productivity, but only when it is grounded in authoritative, trusted tax content.
- Generic AI tools can introduce risk through unreliable or unverifiable outputs, making validation and source transparency essential.
- The most effective tax leaders are using AI selectively – applying it where it adds value while maintaining strong processes for accuracy, security, and oversight.
Tax research has always required precision. What has changed is the speed at which tax departments are now expected to deliver that precision, while navigating an environment defined by constant legal change, growing business complexity, and higher expectations from leadership.
Against that backdrop, artificial intelligence is becoming a bigger part of the tax research workflow.
The opportunity is real. AI can help tax professionals get to relevant authority faster, accelerate first-draft analysis, surface issues they may not have initially considered, and reduce the friction involved in researching unfamiliar topics.
But the enthusiasm around AI also raises an important question: What does responsible, effective AI use actually look like in a tax department?
In this article, you’ll learn how leading tax executives are navigating this moment, as shared during a recent Bloomberg Tax webinar. Their experiences highlight a practical path forward: embrace AI where it adds value, stay disciplined where it falls short, and build processes that balance speed with accuracy.
The new reality: volume, velocity, and expectations
If tax leaders agree on one thing, it’s that their job has fundamentally changed.
The core challenge is not just complexity, but the speed at which that complexity evolves. Regulatory changes are constant, and the expectation is that tax teams will understand and act on those changes almost immediately. This shift has created pressure across research, analysis, and implementation.
“It’s been volume and velocity,” said Tom Donnelly, senior vice president of state and local tax at Comcast. “You know, the volume of changes in the States and the locals and simply the velocity of the changes.”
Donnelly also pointed to the rising expectations that come with it.
“The expectations have changed that you can address things much quicker than you could 10-15 years ago,” he said.
This combination forces tax teams to rethink how they work. It is no longer enough to be thorough. Teams must also be fast, and they must deliver answers that can be operationalized quickly across systems and processes.
This shift creates pressure across the entire workflow:
- Finding relevant information fast
- Interpreting complex legislation
- Applying changes to systems and processes
- Delivering clear answers to stakeholders
Want to hear the complete discussion? View the full on-demand webinar to explore practical strategies, risks, and real-world use cases.
AI can improve tax research, but only when it is grounded in trusted sources
Tax research is not simply a matter of finding a headline and moving on. As Donnelly noted, bill headers and summaries do not always tell the full story. Sometimes a deeper review reveals a seemingly minor provision with outsized implications for planning, compliance, billing systems, or reporting.
The challenge is not just finding information. It is finding the right information quickly enough to make sound decisions before the business needs an answer.
Matt Zajac, senior manager of tax advisory planning at DoorDash, pointed to the same pressure from another angle: speed expectations have risen, but the tools teams rely on do not always produce reliable answers.
A basic web search may be quick, but it is rarely comprehensive or sufficiently authoritative for professional tax work. Generic AI tools can make that problem worse by generating polished answers without clearly showing whether they are grounded in vetted tax sources.
That creates a dangerous mismatch. Business leaders know AI exists and may assume that answers should now come instantly. But in tax, fast answers are only useful if they are correct, supportable, and tied to authoritative content.
Generic AI tools vs. comprehensive tax-specific tools
“A tool like Copilot or ChatGPT is effectively doing a Google search,” Zajac said. “Whereas in Bloomberg Tax, independent of the AI tool, you can find case law, code, regs, PLRs, memos, charts. Their tool is going to search just those sources.
“And what I found incredibly refreshing is if it doesn’t know the answer, it will tell you. I’ve never seen ChatGPT or Copilot say, ‘I don’t know.’ It will always try to tell me something, which is what I get nervous about, because oftentimes it doesn’t know.”
Will Matthews, director of product management at Bloomberg Tax, explained that the system is designed to prioritize verifiability and limit unsupported outputs.
“We have guardrails in place to make sure we’re only answering questions if we can find the answer in our own content,” Matthews said. “We don’t show a response unless we can find that document and substance within that document that backs up every statement our system wants to make.”
For tax professionals, this means every assertion can be traced back to underlying source material, making it easier to validate and defend conclusions.
Watch a clip from the webinar, Growing Through Complexity, to hear experts discuss identifying and validating sources used by generative AI for tax research.
Where AI is making a real difference in tax research
AI is already reshaping how tax teams approach research and analysis, particularly in areas where speed and synthesis are critical.
Common areas where AI is delivering value include:
- Drafting memos and documentation
- Summarizing legislation and guidance
- Reviewing contracts and legal language
- Providing structured explanations of tax concepts
For Zajac, that includes tax advisory tasks such as researching technical questions, drafting documents, and reviewing legal materials. In those environments, AI can help accelerate the first stages of analysis by organizing source material, summarizing key issues, and generating a starting point for deeper review.
The most immediate impact is not full automation, but acceleration. AI helps teams get to a strong starting point much faster than traditional methods allow.
“We have the ability on our desktops to get a half a really decent answer, not a final answer, but a really good starting place,” Donnelly said.
This change has practical implications for how work gets done. Instead of outsourcing early-stage research, teams can begin internally with a structured draft, then engage advisors later for validation and refinement. That shift increases both cost efficiency and internal capability.
Watch a clip from the webinar, Growing Through Complexity, to hear experts discuss how generative AI has changed the tax research process.
AI is also improving how teams explore complex issues. It can surface related authorities, concepts, and interpretations that might not be immediately obvious.
Another meaningful benefit is its impact on learning and development. AI tools can act as an always-available resource for junior staff, helping them understand foundational concepts and build confidence more quickly.
“It’s able to make you a professional very quickly in a potentially new topic area,” Zajac said.
Used well, AI can meaningfully improve the research process. But the distinction between a general-purpose AI tool and a tax-specific research solution is critical.
Matthews emphasized that professionals need more than a generated answer. They also need a way to make their findings “tangible and retrievable.”
That means the system has to do more than summarize. It has to connect the answer to supporting authority, preserve the research trail, and help the user return to the relevant material later.
The AI Assistant within Bloomberg Tax Research will always identify the resources used to generate the response to the question and provide a direct link to the document.
“With generative AI, instead of just identifying relevant documents, we can combine our understanding of those documents with the user’s query to generate new content,” Matthews said.
“This output is a synthesis of patterns found in our existing materials, along with the intelligence we’ve applied in building our system with the content experts and technologists here at Bloomberg Tax.”
See how leading corporate tax teams are using AI today – watch the full webinar for deeper insights and examples.
AI can accelerate learning and improve quality control
Another important opportunity lies in training and knowledge development.
Donnelly has found AI useful for helping more junior professionals learn technical concepts more quickly. In effect, it can act like a knowledgeable guide sitting beside them, explaining topics such as apportionment or recognition and helping them build baseline understanding before escalating questions internally.
That does not eliminate the need for senior review, but it can make junior team members more productive and more confident sooner.
AI can also serve as a quality-control layer. Donnelly described using more than one model to compare answers and surface inconsistencies, often using Bloomberg Tax as a second check against more general-purpose large language models.
In his experience, the curated data set behind a tax-specific tool has real value when it comes to verifying whether a citation is actually on point or whether a case referenced by a general model is being described accurately.
Better prompts lead to better outputs
One of the most important lessons emerging from early AI adoption in tax is that results depend heavily on how the user frames the question.
Detailed, narrow prompts
Donnelly has become a strong advocate for detailed prompting. He noted that tax teams do need to be trained on how to work with AI effectively. Broad, vague requests tend to generate broad, vague answers. Narrower prompts with more context usually produce much better results.
His recommendation is to keep projects focused and break larger assignments into smaller parts. If a memo involves several distinct issues, it is often better to handle each issue separately rather than asking the model to produce one sweeping answer. Asking too much at once can degrade quality.
Insist on neutrality
He also recommends explicitly telling the model not to simply confirm the user’s position. A better instruction is to ask for the arguments on both sides, identify the strongest view, and explain why. That structure can help counter a common weakness in some general AI systems: the tendency to sound agreeable and produce the answer the user seems to want.
Zajac offered a similar warning against leading questions.
“If you ask it, ‘A cheeseburger sold in Colorado is taxable, right?’, it’s more likely to just tell you, ‘yeah, sure’, because even if it’s not true, it’s going to find a reason or explanation for why you’re right,” Zajac explained. “Instead you can ask it more neutrally, ‘What is the taxability of a cheeseburger in Colorado?’ You’re much more likely to get a balanced response.”
Personalized fact patterns as context
He also described a useful practical habit: maintaining a reusable prompt that explains the company’s fact pattern.
By pasting that context into relevant conversations, a tax professional can help the system tailor its answers more effectively. Over time, that prompt can be refined as the user sees where the model tends to misunderstand the facts.
Other helpful techniques include asking for quantitative examples, requesting a confidence level, and using follow-up questions to pressure-test the initial answer. In tax, the first answer should rarely be the last stop.
Where tax teams should be cautious with AI
For all its promise, AI is not the right tool for every research task. In many cases, a direct link to the relevant code section, regulation, or chart is still the better route.
The key is not to force AI into every workflow. It is to use it where it genuinely adds value.
Experts identified three areas where traditional methods remain more reliable.
1. Precise tax calculations
AI struggles with calculations. Tax work often requires precise, repeatable outputs, while AI models are inherently probabilistic and can stumble even on simple math.
This mismatch means that even when AI produces a reasonable estimate, tax professionals still need to validate the result using trusted calculation tools to ensure accuracy and consistency.
“AI is probabilistic and in tax, we’re used to deterministic calculation engines,” Donnelly said. “If you ask AI, ‘What’s 1+1?’ and you ask it 1,000 times, you might get shades because it’s giving you the probabilities.”
2. Analyzing highly structured documents, like tax returns
Highly structured, multi-hundred-page documents present another challenge. Complex filings, such as tax returns, are difficult for AI to process accurately due to their density and format.
“The tax return is so darn complicated that the AI has a real problem with that,” Donnelly said.
3. Monitoring regulatory updates
Both Zajac and Donnelly remain cautious about trusting AI to monitor tax developments. Tax departments still rely heavily on established services like Bloomberg Tax Research to keep up with legislative and regulatory changes, and for good reason.
When the goal is to know what changed and be confident that nothing important was missed, incompleteness is a serious risk.
If an AI-generated update skips something material, the user may have no easy way to know what was omitted. That makes it different from drill-down research, where a professional can examine the answer, test it, and validate the cited support.
Bloomberg Tax has built a solution to this challenge – with its Developments Tracker Agent, users can scan for legislative developments that impact their workpaper.
The agent analyzes the workpaper to identify the relevant tax law, then checks Bloomberg Tax Research content for any recent legislative developments. It will surface related updates and provide linked citations so users can validate the source law.
Within Bloomberg Tax Workpapers, the Developments Tracker Agent uses AI to surface related updates and provide linked citations so users can validate the source law.
Managing risk: security and trust
Adopting AI introduces new risks that tax leaders must actively manage. These risks are not theoretical. They affect the integrity of tax advice, the security of sensitive data, and the level of trust stakeholders place in the tax function. As AI becomes more embedded in daily workflows, leaders need to be deliberate about how these tools are governed and used.
Data security is a critical concern, and it is often the first question tax leaders raise when evaluating AI tools. Organizations need to understand how their data is handled, where it is stored, and whether it is being used to train underlying models.
Donnelly emphasized the importance of internal discipline alongside tool selection.
“Don’t gratuitously feed PII or business confidential information into the LLM if you don’t need to,” he said.
Legal considerations, including privilege, add another layer of complexity. The implications of sharing sensitive discussions with AI tools are still evolving, and organizations need to proceed carefully.
“The conversation with the attorney is probably privileged, but the minute you’ve divulged it to an LLM, that becomes a problem,” Donnelly said.
Bloomberg Tax has robust security controls and limits how user data is handled within its system.
“We do not send information, whether it’s your query prompts or now your document uploads, outside of our system,” Matthews said. “And we never train any LLM with any user information.”
These safeguards are designed to address one of the biggest barriers to AI adoption in tax departments: trust. When users know that their inputs remain contained and that outputs are grounded in vetted content, they are more likely to integrate AI into their workflows.
To manage these risks effectively, tax leaders should focus on a combination of technology and process. Even the most secure platform requires clear internal policies and user awareness to be effective.
Key risk management priorities include:
- Verifying all AI-generated outputs before relying on them
- Limiting the use of sensitive or confidential data in prompts
- Choosing tools with strong security and transparency controls
- Establishing internal guidelines for responsible AI use
- Training teams to understand both capabilities and limitations
A thoughtful approach to risk does not slow adoption. It enables it. By putting the right controls in place, tax leaders can take advantage of AI’s benefits while maintaining the standards of accuracy and confidentiality their organizations expect.
What effective adoption looks like now: 6 steps to take
For tax leaders thinking about how to introduce AI into research workflows, the practical path is becoming clearer.
- Start with use cases where the work is heavily textual and the gains are easiest to measure, such as issue exploration, document review, drafting support, or concept explanation.
- Use tools grounded in authoritative tax content, not just broad internet sources.
- Require citation review and source validation.
- Set clear rules for confidentiality and data handling.
- Train users to ask neutral, fact-specific questions when prompting.
- Keep traditional tools in place where they are still better suited to the task.
The teams getting the most value from AI are not the ones using it everywhere. They are the ones using it deliberately.
Go beyond the highlights. Watch the full on-demand webinar to learn how experts are balancing speed, accuracy, and risk with AI.
Smarter research workflows with Bloomberg Tax
Tax research is unlikely to get easier. The volume of information will keep growing. The pace of change will remain high. And pressure for faster answers will continue.
That is exactly why AI matters.
Bloomberg Tax is designed to meet this moment with AI that is purpose-built for tax professionals, not adapted from general use. It combines authoritative primary and secondary sources with advanced AI capabilities to deliver answers that are fast, transparent, and grounded in trusted content.
Features like Deep Thinking mode support complex, multi-step research by asking clarifying questions and producing more complete, structured outputs. Document-based analysis allows teams to upload their own materials and receive tailored insights, while integrated workflows connect research directly to workpapers, as well as compliance and planning tasks.
For tax leaders, the goal is not just to adopt AI, but to adopt it in a way that strengthens decision-making and supports long-term growth. Bloomberg Tax provides a solution that aligns with that goal – delivering trusted answers, streamlining research, and helping teams operate at a higher level.
Request a demo today to see how Bloomberg Tax can help your team move faster, reduce risk, and confidently navigate complexity.