Data-intensive applications are an essential part of the B2B landscape. They power more and more vital parts of the enterprise world. Marketing, Sales, IT, Engineering, and HR are all increasingly data-dependent segments of a business – as evidenced by how many products are being funded and built to target these data-hungry users. And while companies like Snowflake have created fantastic data solutions, the insights produced by data, AI, and ML are only valuable when exposed to end users in an intuitive way that enables them to investigate, understand, and take action in response to their changing data landscapes.
In short, humans need a front-end for these data-intensive applications. And the expectation for usability is growing, not shrinking.
But these data-intensive apps are not prescriptive – they are definitely not cookie-cutter or one-size-fits-all. You cannot build a linear workflow or wizard to help users investigate and remediate their data – at least not for a broad range of data originating from different sources and used by different kinds of end users in different businesses. End users will use data products a little differently every time, following different connections and making different decisions – all based on the data they see. These investigations and explorations are unstructured (or at least non-linearly structured) tasks. As such, the UI and UX for these products must also support unstructured data exploration that can start at multiple places, traverse multiple different connections, and conclude at multiple endpoints.
Creating this type of application is a daunting task. One that involves configuration, CRUD (create, read, update, delete) operations, actions, data visualization (of course), and most importantly, a deeply interconnected set of pages and views that can respond to the changing needs of the end user. For most complex data, templates do not work: these applications must reflect specific concepts, and the interconnected relationships among those concepts, to give users any chance of finding the patterns they need to create or test their hypotheses.
Allowing a product to deal with this level of interconnectedness is the key to supporting the common categories of workflow a data-intensive product needs: monitor/browse – investigate/drill-in – remediate/take-action. The resulting complexity is also the most common frustration point. How many times have you seen something in a product and wanted to drill down?
“Oh this expense report from Steve looks high – let me see what items he is expensing?”
Only to find out it isn’t a link.
“Damn, ok – let me write down this expense report name, then I’ll navigate to the report management page… and I’ll filter and find that report, ok – wait for that to load. Ok…”
And now you need to drill in again, and again and again… but your context is lost. You can’t even rely on your browser history because it is cluttered with all these intermediate pages. Every one of those links must be hand-coded. Every visualization of any relationship that helps you make such a connection must be manually connected by an engineer.
This is a constant failing of these data-intensive applications. A new way of building them is needed – one where creating connections is not just easy but automatic. A process in which generating grids/tables and common visualizations, and enabling actions and CRUD operations, is simple (and perhaps even automated). A platform that can generate the front-end infrastructure that takes forever to build (theming, internationalization, UI access control, navigation, etc.), freeing up resources to create domain-specific and product-specific UI and UX. A solution that lets you respond to customers and update your UI/UX in hours, allowing your talented staff to focus on the roadmap and on adding “sizzle” and innovation to your product.
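To make the idea of “automatic connections” concrete, here is a minimal sketch of what declaring entity relationships once could look like, so that every reference to another record becomes a drill-down link without per-page wiring. All of the names here (`EntitySchema`, `linkFor`, the example routes) are hypothetical illustrations, not the API of any particular platform.

```typescript
// Hypothetical sketch: declare each entity and its relationships once;
// the front end can then derive drill-down links automatically instead
// of an engineer hand-coding every link on every page.

type EntitySchema = {
  name: string;                       // e.g. "ExpenseReport"
  route: (id: string) => string;      // detail page for one record
  relations?: Record<string, string>; // field name -> related entity name
};

const schemas: Record<string, EntitySchema> = {
  Employee: { name: "Employee", route: (id) => `/employees/${id}` },
  ExpenseReport: {
    name: "ExpenseReport",
    route: (id) => `/expense-reports/${id}`,
    relations: { submittedBy: "Employee" },
  },
};

// For any field of any record, answer: "does this value point at another
// entity, and if so, where does clicking it go?"
function linkFor(entity: string, field: string, value: string): string | null {
  const related = schemas[entity]?.relations?.[field];
  return related ? schemas[related].route(value) : null;
}

console.log(linkFor("ExpenseReport", "submittedBy", "steve-42"));
```

With a declaration like this, Steve’s expense report in the earlier example would render as a link by construction, and the generated grids and detail views described above could all reuse the same relationship metadata.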
So what does this mean for developing and maintaining these data-intensive applications?