Secure Generative AI Adoption: A Guide with GenAI Accelerator, the Knowledge-Based Solution

Tushar Mehta | 29 Apr 2024

In the ever-evolving landscape of artificial intelligence, one term that’s been buzzing around boardrooms and tech circles alike is Generative AI (GenAI). It has the potential to revolutionise how we augment user experiences, generate insights, and even innovate. Yet, despite that potential, many organisations remain hesitant to fully embrace it. As someone who is entrenched in generative AI day in and day out, working and talking with different clients, I’ve witnessed these apprehensions firsthand.

In one of our previous blogs, my colleague Akash Jattan talked about some of the challenges faced by many who are contemplating the adoption of generative AI. Whether it is ownership of data and intellectual property, concerns around data leakage, or security concerns around exposure to the internet, these are challenges that have always been on the minds of anyone thinking about how to adopt GenAI within their organisation.

Introducing “GenAI Accelerator,” a first-to-market secured generative AI landing zone architecture product on Microsoft Azure, meticulously designed to address these challenges and hesitations. It goes beyond conventional cloud-based solutions, harnessing the power of the Data Mesh architecture with sound data practices wrapped around it. With GenAI Accelerator, organisations can enjoy the peace of mind that comes with complete data privacy, as valuable information remains securely within their own environment, minimising the risk of leaks or breaches. Moreover, GenAI Accelerator prioritises compliance and best practices, integrating seamlessly with existing data governance protocols to uphold stringent standards of security and legal adherence. With GenAI Accelerator, organisations can unlock the full potential of generative AI while ensuring the utmost protection of their sensitive data assets.

The Architecture

GenAI Accelerator is an automatically deployed, modular AI landing zone. A modular AI landing zone is a flexible, scalable, and secured environment that enables the deployment and integration of various AI services and applications as separate modules, based on the organisation’s requirements and maturity. For example, if an organisation needs to provide AI services based on the documents in its Procurement department, a module can be created to fulfil that requirement of searching across a large number of Procurement documents in a secure manner. GenAI Accelerator brings together natural language processing, computer vision, and deep learning to generate customised solutions for different use cases and scenarios.
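
To make the idea of a module concrete, here is a minimal sketch of how a departmental module could be described before automated deployment. The class and field names are purely illustrative assumptions, not GenAI Accelerator’s actual configuration schema.

```python
# Illustrative only: a hypothetical description of one landing-zone module.
from dataclasses import dataclass


@dataclass
class LandingZoneModule:
    department: str                  # e.g. "Procurement"
    document_sources: list[str]      # storage paths or sites to index
    ai_services: list[str]           # Azure services the module needs
    private_networking: bool = True  # keep all traffic on private endpoints


procurement_module = LandingZoneModule(
    department="Procurement",
    document_sources=["https://contoso.sharepoint.com/sites/procurement"],
    ai_services=["Azure OpenAI", "Azure AI Search", "Azure AI Document Intelligence"],
)
print(procurement_module)
```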

The architecture of GenAI Accelerator is built upon three distinct layers: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and finally the Use Case layer.

Infrastructure as a Service (IaaS): At the foundation of GenAI Accelerator lie the core services that serve as the fundamental building blocks of the architecture. This layer provisions the AI landing zone as an isolated, secured subscription aligned to ISO27001, the gold standard for information security. The solution leverages Private Link services and private endpoint connections to enable private address routing. This allows public IPs to be removed and ensures data travels via private ExpressRoute, limiting any exposure of company data to the internet. Authorisation through RBAC allows access for users and identities to be managed, and GenAI Accelerator integrates seamlessly with existing enterprise security policies.
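
As a rough illustration of the RBAC integration, the following sketch assigns the built-in Reader role to a group at a resource group scope. It assumes the azure-identity and azure-mgmt-authorization packages; the subscription ID, resource group name, and group object ID are placeholders rather than anything prescribed by the product.

```python
# Sketch: grant an Entra ID group read access to the landing zone resource group.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
scope = f"/subscriptions/{subscription_id}/resourceGroups/genai-landing-zone"

# Well-known ID of the built-in Reader role.
reader_role = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
    "roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Role assignments need a unique name; the principal is the object ID of the group.
client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=reader_role,
        principal_id="<entra-id-group-object-id>",
    ),
)
```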

This layer also contains the storage configuration, which runs on a medallion architecture and employs practical methodologies and distributed storage systems, positioning the accelerator as an enabling tool for AI adoption. While it runs on its own storage configuration, it also allows integration with an existing data architecture and platform, eliminating the effort required to set up a new data platform. GenAI Accelerator also provides organisations with the flexibility and scalability needed to support the demanding computational requirements of generative AI applications. Whether it’s provisioning additional computing power to handle spikes in workload or scaling storage capacity to accommodate growing data volumes, the IaaS layer ensures that GenAI Accelerator can seamlessly adapt to evolving business needs.
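
The medallion layout can be pictured with a short sketch that creates bronze, silver, and gold zones in an Azure Data Lake Storage Gen2 account. It assumes the azure-identity and azure-storage-file-datalake packages; the account, container, and folder names are placeholders, not the accelerator’s actual storage configuration.

```python
# Sketch: bronze/silver/gold zones in ADLS Gen2 for one departmental module.
from azure.core.exceptions import ResourceExistsError
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)

# One container for the landing zone; ignore the error if it already exists.
try:
    service.create_file_system("genai")
except ResourceExistsError:
    pass

filesystem = service.get_file_system_client("genai")

# Raw documents land in bronze, cleaned text in silver,
# and curated, retrieval-ready data in gold.
for zone in ("bronze", "silver", "gold"):
    filesystem.create_directory(f"{zone}/procurement")
```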

Platform as a Service (PaaS): Sitting on top of the IaaS layer is the PaaS layer, which serves as the framework for developing, deploying, and managing generative AI applications. Based on the principles of ModelOps, this layer allows specifically tailored AI workflows to be built, leveraging tools such as Azure AI Search and Azure AI Document Intelligence. This allows developers, even those without machine learning expertise, to integrate AI capabilities into their applications, improving user experiences and creating more intelligent applications.
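
As an example of the kind of workflow these tools enable, the sketch below extracts text from a PDF with Azure Document Intelligence (via the azure-ai-formrecognizer package) and indexes it in Azure AI Search. Endpoints, keys, and the index schema are placeholders; the accelerator wires these services together in its own way.

```python
# Sketch: PDF text extraction with Document Intelligence, then indexing in AI Search.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.search.documents import SearchClient

doc_client = DocumentAnalysisClient(
    endpoint="https://<doc-intelligence>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<doc-intelligence-key>"),
)
search_client = SearchClient(
    endpoint="https://<search-service>.search.windows.net",
    index_name="procurement-docs",
    credential=AzureKeyCredential("<search-key>"),
)

# Read the PDF text with the prebuilt "read" model.
with open("contract.pdf", "rb") as f:
    result = doc_client.begin_analyze_document("prebuilt-read", document=f).result()

# Push the extracted text into the search index so it becomes queryable.
search_client.upload_documents(documents=[
    {"id": "contract-001", "content": result.content, "source": "contract.pdf"}
])

for hit in search_client.search(search_text="payment terms"):
    print(hit["id"])
```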

GenAI Accelerator also provides the flexibility to train and tune large language models such as GPT-3 or GPT-4 and OpenAI embedding models. This layer also hosts the vector database that stores embeddings and allows seamless query retrieval, and it integrates into existing environments. For instance, a model trained in Azure Machine Learning can utilise the computer vision capabilities of Azure AI Services.
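
To illustrate how embeddings support query retrieval, here is a simplified sketch that creates embeddings through an Azure OpenAI deployment and performs a naive in-memory cosine-similarity lookup; in the accelerator this role is played by the hosted vector database. The endpoint, key, and deployment name are placeholders.

```python
# Sketch: embed a small corpus and retrieve the chunk closest to a query.
import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<aoai-resource>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2024-02-01",
)


def embed(text: str) -> np.ndarray:
    # "model" is the name of your embedding deployment in Azure OpenAI.
    response = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(response.data[0].embedding)


corpus = ["Standard payment terms are 30 days.", "Suppliers must hold ISO27001."]
vectors = np.vstack([embed(chunk) for chunk in corpus])

query = embed("How quickly do we pay suppliers?")
scores = vectors @ query / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(query))
print(corpus[int(np.argmax(scores))])
```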

Moreover, the PaaS layer facilitates seamless integration with external data sources through a secured ingestion pattern, enabling organisations to leverage diverse datasets for training and inference. Advanced data preprocessing capabilities, feature engineering tools, and model versioning mechanisms empower data scientists to iterate rapidly, refine models, and drive continuous improvement in AI performance.
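
Model versioning in this kind of ModelOps workflow is often handled through the MLflow registry, which Azure Machine Learning supports natively. The sketch below is a generic illustration rather than the accelerator’s pipeline; the experiment and model names are made up.

```python
# Sketch: each run that logs with registered_model_name creates a new model version.
import mlflow
from sklearn.linear_model import LogisticRegression

mlflow.set_experiment("genai-accelerator-poc")

with mlflow.start_run():
    model = LogisticRegression().fit([[0.0], [1.0]], [0, 1])  # toy training step
    mlflow.sklearn.log_model(
        model, "model", registered_model_name="procurement-classifier"
    )
```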

Use Case Layer: This layer combines the application services layer and the end-user experience layer. GenAI Accelerator provides a default deployment of a web interface for user interaction, giving end users a bot-like, conversational experience similar to an online instance of ChatGPT. Furthermore, the Use Case layer integrates seamlessly with existing business processes and systems, ensuring alignment with organisational goals and objectives. APIs, web services, and integration frameworks enable interoperability with enterprise applications, data repositories, and analytics platforms. Whether it’s generating creative content, automating repetitive tasks, or optimising business processes, the Use Case layer empowers organisations to harness the full potential of generative AI across a myriad of applications and industries.
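
For a sense of how a use-case API might sit behind the web interface, the following sketch exposes a single chat endpoint with FastAPI and forwards questions to an Azure OpenAI chat deployment. FastAPI is used purely for illustration, and the endpoint, key, and deployment name are placeholders; this is not the accelerator’s actual bot implementation.

```python
# Sketch: a minimal chat API the web interface could call.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import AzureOpenAI

app = FastAPI()
aoai = AzureOpenAI(
    azure_endpoint="https://<aoai-resource>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2024-02-01",
)


class Question(BaseModel):
    text: str


@app.post("/chat")
def chat(question: Question) -> dict:
    completion = aoai.chat.completions.create(
        model="gpt-4",  # name of the Azure OpenAI chat deployment
        messages=[
            {"role": "system", "content": "Answer using the organisation's documents."},
            {"role": "user", "content": question.text},
        ],
    )
    return {"answer": completion.choices[0].message.content}
```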

The Delivery Methodology

At dataengine, we understand the importance of a structured delivery framework to guide organisations through the adoption and deployment of AI solutions. We guide you on your AI journey through a collaborative, iterative, step-by-step process: assessing your AI readiness, identifying valuable use cases, deploying proofs of concept, refining them into production-ready solutions, and building practices for Responsible AI Governance.

Phase 1: AI Readiness: The journey towards AI adoption for any organisation begins with assessing its readiness to embrace generative AI technologies. During the AI Readiness stage, we collaborate closely with stakeholders to evaluate the organisation’s data infrastructure, technical capabilities, and strategic objectives.

Key activities within the AI Readiness stage include:

  • Infrastructure Assessment: Evaluating the organisation’s computational infrastructure, cloud readiness, and data storage capabilities to ensure they can support generative AI workloads effectively.
  • Data Audit: Assessing the quality, quantity, and accessibility of data assets within the organisation, identifying potential gaps or inconsistencies that may impact AI initiatives.
  • Governance & Policy Frameworks: Assessing existing governance and policy frameworks, ensuring alignment with regulatory standards and ethical guidelines, and safeguarding data integrity and privacy while maximising the value of generative AI solutions.
  • Use Case Identification: Selecting and prioritising high-potential use cases or scenarios where generative AI can deliver tangible business value and impact.
  • Strategic Alignment: Developing a clear roadmap and implementation strategy tailored to the organisation’s unique needs and aspirations. This includes defining key performance indicators (KPIs), establishing governance frameworks, and aligning AI initiatives with overarching business goals.

Phase 2: POC Deployment: With AI readiness established, the next stage in the delivery framework is the deployment of a Proof of Concept (POC).

Key activities within the POC stage include:

  • OpenAI Access: Azure OpenAI services are not available by default for an organisation. Submitting a successful application for OpenAI access is the first step towards deploying the POC.
  • Data Ingestion Pattern Design: Designing the ingestion pattern to either integrate the generative AI product with existing systems or ingest data from various sources into the AI landing zone.
  • Network Configuration: An essential step for the POC deployment is configuring the virtual network with private endpoints to ensure a secure environment.
  • GenAI Accelerator Deployment: Automated deployment of the GenAI Accelerator product to create a production-grade, scalable, ISO27001-aligned secured landing zone for generative AI development, along with the deployment of an OpenAI model and a production-ready AI bot.
  • Pilot Use Case (if taken as an add-on): Building and training generative AI models tailored to the selected use cases, leveraging synthetic data or limited production data sets.
  • FinOps Dashboard: A standard dashboard deployment for monitoring the ongoing cost of the generative AI solution; a sketch of the kind of cost query such a dashboard runs follows this list.
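
As a rough idea of what sits behind such a dashboard, the sketch below runs a month-to-date cost query with the azure-mgmt-costmanagement package. The scope, granularity, and aggregation names are assumptions to adapt to your own tenant, not the accelerator’s actual FinOps implementation.

```python
# Sketch: month-to-date daily cost for the landing zone resource group.
from azure.identity import DefaultAzureCredential
from azure.mgmt.costmanagement import CostManagementClient
from azure.mgmt.costmanagement.models import (
    QueryAggregation, QueryDataset, QueryDefinition,
)

scope = "/subscriptions/<subscription-id>/resourceGroups/genai-landing-zone"
client = CostManagementClient(DefaultAzureCredential())

result = client.query.usage(
    scope=scope,
    parameters=QueryDefinition(
        type="ActualCost",
        timeframe="MonthToDate",
        dataset=QueryDataset(
            granularity="Daily",
            aggregation={"totalCost": QueryAggregation(name="Cost", function="Sum")},
        ),
    ),
)

# Each row is a daily cost figure that a dashboard could chart over time.
for row in result.rows:
    print(row)
```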

Phase 3: Use Case Deployment: The final phase focuses on scaling and operationalising generative AI solutions across the organisation’s ecosystem. During the Use Case Deployment stage, successful POCs are transitioned into production-ready deployments, integrated into existing workflows, and aligned with business objectives.

Key activities within this stage include:

  • Solution Scaling: Scaling up generative AI solutions to accommodate larger data sets, increased workloads, and broader user bases, while ensuring reliability and performance.
  • Integration and Deployment: Integrating generative AI solutions seamlessly into existing IT infrastructure, business processes, and applications, leveraging APIs, connectors, and integration frameworks.
  • Training and Change Management: Providing training and support to end-users, stakeholders, and IT teams to ensure smooth adoption and utilisation of generative AI solutions, driving organisational change and fostering a culture of innovation.
  • Monitoring and Optimisation: Implementing monitoring, tracking, and performance optimisation mechanisms to continuously evaluate the effectiveness and impact of generative AI solutions, iterating on models and workflows to drive ongoing improvement and value realisation.

If you’re interested in finding out more about dataengine’s GenAI Accelerator, visit https://dataengine.co.nz/genai-package-offer/

Tushar Mehta

Tushar is a data enthusiast with a passion for creating top-notch data-driven products and services. As the Head of AI Enablement at dataengine, he leverages his expertise in machine learning, data science, AI, customer journey, and design thinking to bring impactful solutions to life. Tushar has a proven track record of success, having delivered best-in-class products across various industries, including AdTech, MarTech, Telco, Retail, and Ecommerce.
