By Richard Louden, Head of Technology (Data) at Nimble Approach
Many organisations are eager to adopt AI, and a number of the larger platform players are marketing key functionality to accelerate this. This blog explores whether the reality of these features matches the marketing, by looking at how quickly an agentic AI application can be built and deployed using Databricks.
The Rush for Organisational AI
Organisations everywhere are racing to adopt AI, driven by the promise of enhanced productivity, faster growth, and more efficient, automated decision-making. However, integrating new technology is a difficult task and many organisations struggle to progress up the maturity curve required to deliver real business value from AI. Figure 1 provides an overview. As the greatest value is typically realised at level 3 and above, the key question becomes: how can organisations accelerate their progression up the curve?
For organisations that aren’t natively tech companies, one of the most effective approaches is to build on existing platforms as they introduce increasingly AI-centric capabilities. In the data space specifically, the three major platform providers – Databricks, Snowflake, and Fabric – all now offer built-in capabilities aimed at helping users develop and deploy agentic AI solutions closer to their core data.

Databricks as an Enabler
Given the claims from these platforms, it is worth investigating how easily an agentic AI application can be developed and deployed. With more than 20,000 customers worldwide – and given my own familiarity with the platform and Nimble’s status as a Databricks partner – Databricks was the obvious choice.
The platform offers a number of capabilities that can enable accelerated AI app development, with an overview provided in Figure 2. It is worth noting that I decided against using Lakebase for this application, as the plan was only ever to build a rapid proof of concept. However, there are clear benefits to including it in production applications, given it offers significantly faster data retrieval and a way to maintain chat history for more context-aware responses.

The Use Case
The key to delivering value from AI lies in identifying strong use cases, particularly those involving the analysis of multiple data sources to support decision-making, whether with or without human involvement. While this increases the complexity of implementation – requiring organisations to collate, cleanse, and contextualise data to support the AI model – it also creates the opportunity to free staff from repetitive analysis and focus them on more complex work.
With this in mind, I wanted to select a process from an industry I was familiar with, where I could generate the required data to make it somewhat realistic. Eventually, I landed on leakage detection – a process that involves the analysis of sensor, staffing, and asset data to understand where a water pipe may be leaking and whether to commission an investigation.
The Foundations
To create a leakage analysis agent, there first needs to be data to analyse across five key areas: sensors, sensor measurements, staff, staff availability, and ongoing maintenance. After identifying the required datasets, I decided that generating them would be a useful test of the level of complexity Genie – Databricks’ in-workspace coding agent – could realistically handle. After describing each dataset and how it would need to be updated, Genie built out an initial set of notebooks that were then iterated upon to get what was required (Figure 3).
Further prompts allowed me to create the required pipelines to run these notebooks on a requested schedule. After around 30 minutes, I had automated daily updates running for the key fabricated data sources, with only a handful of minor manual corrections needed where Genie had either drifted slightly off track or introduced small errors. Once the initial pipelines had run, I had five key tables stored in Unity Catalog, which would drastically simplify the process of granting the data access permissions my future app would need.
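For illustration, a notebook of the kind Genie generated for the sensor-measurements dataset might look like the following sketch. The sensor IDs, baselines, and noise levels are all fabricated assumptions of mine, not actual Genie output:

```python
import numpy as np
import pandas as pd

def generate_sensor_readings(sensor_ids, date, seed=0):
    """Fabricate one day of hourly pressure/flow/temperature readings
    per sensor. Baselines and noise levels are illustrative only."""
    rng = np.random.default_rng(seed)
    timestamps = pd.date_range(date, periods=24, freq="h")
    rows = []
    for sensor_id in sensor_ids:
        for ts in timestamps:
            rows.append({
                "sensor_id": sensor_id,
                "timestamp": ts,
                "pressure_bar": rng.normal(4.0, 0.2),
                "flow_l_per_s": rng.normal(12.0, 1.5),
                "temperature_c": rng.normal(10.0, 0.8),
            })
    return pd.DataFrame(rows)

readings = generate_sensor_readings(["S-001", "S-002"], "2024-06-01")
```

In the actual workspace, a DataFrame like this would then be written to a Unity Catalog table and the notebook attached to one of the scheduled pipelines.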

Figure 3 – Example output from the Genie coding agent
The Application
The next part of the process could be seen as the most complex, given the number of required components to create a functional application capable of interacting with LLMs. However, this challenge was significantly reduced through a combination of Databricks-provided application templates and the Genie assistant. After pulling down some starter code and adding the required components to support automated deployment, I had a solid foundation from which to build the application architecture shown in Figure 4.

The remaining effort to get the app in shape took the best part of a day. This involved:
- Reading the Databricks documentation to understand how to connect apps to data via specified tools.
- Using Genie to build out the required tools and split them into domain-associated files.
- Initial tests of the process, followed by iterations to add new tools for running data pipelines and updating tables.
- Front-end updates, via Genie, to remove the generic feel.
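To illustrate the tool pattern referred to above, here is a minimal, hypothetical sketch: each tool is a plain Python function paired with a JSON schema that the LLM uses to decide when and how to call it. The function names, schema, and stubbed values are my own assumptions, not the app’s actual code:

```python
def check_sensor_baseline(sensor_id: str, metric: str) -> dict:
    """Compare a sensor's latest reading against its baseline (stubbed here).
    In the real app this would query the Unity Catalog tables."""
    baselines = {"pressure_bar": 4.0}  # illustrative values only
    latest = 3.1                        # would come from a live query
    baseline = baselines.get(metric, 0.0)
    return {"sensor_id": sensor_id, "metric": metric,
            "deviation": round(latest - baseline, 2)}

# Schema the LLM sees when deciding whether to invoke the tool.
CHECK_SENSOR_TOOL = {
    "name": "check_sensor_baseline",
    "description": "Check whether a sensor metric has deviated from baseline.",
    "parameters": {
        "type": "object",
        "properties": {
            "sensor_id": {"type": "string"},
            "metric": {"type": "string",
                       "enum": ["pressure_bar", "flow_l_per_s"]},
        },
        "required": ["sensor_id", "metric"],
    },
}

TOOLS = {"check_sensor_baseline": check_sensor_baseline}

def dispatch(tool_call: dict) -> dict:
    """Route a tool call emitted by the LLM to the matching function."""
    return TOOLS[tool_call["name"]](**tool_call["arguments"])

result = dispatch({"name": "check_sensor_baseline",
                   "arguments": {"sensor_id": "S-001",
                                 "metric": "pressure_bar"}})
```

Splitting these function/schema pairs into domain-associated files is what kept the tool layer manageable as it grew.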
This resulted in the application shown in Figure 5, designed to provide quick access to key functionality through shortcut buttons, alongside a chat interface for additional context and more detailed instructions.

Figure 5 – Final UI for the application
In terms of its capability, the resulting application was able to perform a number of simplified tasks that would normally be handled through reporting or human involvement, as outlined below.
- Read and analyse raw sensor information to understand if pressure, flow, or temperature has deviated from a known baseline.
- Create ad hoc SQL queries against data sets based on a user’s questions.
- Analyse staff availability to identify who is free and, based on the potential severity of the leak, assign the most appropriate personnel to investigate.
- Check the status of key ETL pipelines to understand when data was refreshed, and if this may be causing issues with analysis.
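The first of these capabilities – baseline deviation detection – can be sketched as a simple per-sensor z-score check. The threshold and sample data below are illustrative assumptions; a production version would tune them per sensor and metric:

```python
import pandas as pd

def flag_deviations(readings: pd.DataFrame, metric: str,
                    threshold: float = 3.0) -> pd.DataFrame:
    """Flag readings more than `threshold` standard deviations from each
    sensor's own historical mean for the given metric."""
    stats = (readings.groupby("sensor_id")[metric]
             .agg(["mean", "std"]).reset_index())
    merged = readings.merge(stats, on="sensor_id")
    merged["z_score"] = (merged[metric] - merged["mean"]) / merged["std"]
    return merged[merged["z_score"].abs() > threshold]

readings = pd.DataFrame({
    "sensor_id": ["S-001"] * 6,
    "pressure_bar": [4.0, 4.1, 3.9, 4.0, 4.1, 1.5],  # final reading drops
})
anomalies = flag_deviations(readings, "pressure_bar", threshold=2.0)
```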
However, its real power comes from its ability to stitch these tasks together and automate key aspects. For example, an analyst might be shown a list of potential leaks on a dashboard, but would then typically switch to a different view to understand current maintenance activity, and to a separate system to book an investigation in. Instead, the agent can bring these key pieces of information together and generate a proposed action plan, shown in Figure 6, before returning it for approval and interacting with downstream systems to create the required work items.
This pattern of agentic analysis – grounded in key data and context, and combined with human input and decision-making – will be key to unlocking efficiencies in the near future. Additionally, the ability of these applications to run ad hoc analyses, as opposed to fixed dashboards, will give operators more relevant and timely insights to improve their decision-making.
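The human-in-the-loop pattern described here can be sketched as a propose-then-approve loop, where nothing touches downstream systems until a person signs off. All names and structures below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ActionPlan:
    leak_site: str
    assigned_engineer: str
    approved: bool = False
    work_items: list = field(default_factory=list)

def propose_plan(leak_site: str, available_staff: list) -> ActionPlan:
    """Draft a plan by pairing the leak with the first available engineer."""
    return ActionPlan(leak_site=leak_site,
                      assigned_engineer=available_staff[0])

def execute_if_approved(plan: ActionPlan) -> ActionPlan:
    """Only create downstream work items once a human has signed off."""
    if plan.approved:
        plan.work_items.append(f"investigate:{plan.leak_site}")
    return plan

plan = propose_plan("DMA-17", ["J. Smith", "A. Patel"])
plan.approved = True  # the human review step
plan = execute_if_approved(plan)
```

The key design choice is that the execution step is gated on the `approved` flag rather than on the agent’s own judgement.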

Figure 6 – Example of combined leakage and staff availability analysis, with proposed job assignments
Closing Thoughts
The ultimate goal of this work was to test how easy it is to develop and deploy an agentic AI application, using one of the established data platforms. While I initially expected a significant amount of friction throughout the process, the reality was quite the opposite – though this was undoubtedly helped by my existing experience with Databricks and having a relatively mature platform already in place.
In terms of its features, the ability to easily create data pipelines, deploy serverless apps, and seamlessly establish the correct permissions provided incredibly useful foundations. On top of this, the repository of existing application templates and the power of the Genie coding agent significantly reduced the level of software engineering effort required.
From defining the initial use case to deploying a reasonably functional agent took around a day and a half, which was significantly faster than I had expected. That said, this was heavily supported by having an existing platform already in place, along with established processes for creating new data catalogues and triggering automated deployments when code was updated. It’s also worth pointing out that to move this application into production would take further technical improvements around authentication, networking, and some architectural elements, though this is all possible within the current platform.
With all of this in mind, the marketing of these features does match up to reality, provided an organisation has the experience to architect and develop their Databricks platform to the required level of maturity. Building out agentic functionality quickly is clearly on the agenda for most, but this needs to be tempered with the understanding that the foundations need to be there to enable this.
How Nimble can help
If you are looking to move up the AI maturity curve but are finding it difficult to progress from Copilot subscriptions to agentic applications, Nimble Approach can help. Our experts across data, engineering, platforms, and AI will work with you to build out your AI use cases and bring them into production.