With the latest technological advancements for solving large, complex business problems, organizations are looking to build and deploy Intelligent Automation solutions that package BPM, RPA, AI/ML, and GenAI capabilities together to deliver an enhanced customer experience (CX).
A typical architecture for such solutions is built around these critical components:
- Data processing layer: This layer encompasses a range of tasks, such as gathering data from various sources in multiple formats, cleaning and normalizing the data, performing preliminary processing, and subsequently storing it in databases.
- Cloud SaaS: Cloud is the most common platform, which not only provides ease of storage and access to data but also provides enhanced security at an optimized cost.
- Extraction models: Various analytical models are used to extract usable information from unstructured data.
- AI/ML models: These are advanced models, often built with deep learning, that generate business insights through predictive and prescriptive analytics.
- GenAI model layer: This generates new content using large language models (LLMs), providing on-the-fly solutions for business problems.
- Integration layer: The different components are tied together through an integration platform (such as MuleSoft or Informatica), depending on the enterprise tech stack.
Take a typical business use case: the end client shares data in different formats. An RPA solution is deployed not only to collate and validate the data and transform it into a consumable format, but also to apply data analytics that generate meaningful insights through predictive AI/ML models, while GenAI, powered by LLMs, generates content for ready use.
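To make the flow concrete, here is a minimal, purely illustrative Python sketch of how these stages chain together; every function is a trivial stand-in, not part of any product API.

```python
# Minimal, illustrative chaining of the layers described above.
# Each function is a trivial stand-in, not a real product API.

def collate_and_validate(files):             # RPA layer
    return [f for f in files if f]

def extract_and_transform(files):            # extraction layer
    return [{"source": f, "text": str(f)} for f in files]

def run_predictive_models(records):          # AI/ML layer
    return [{**r, "score": len(r["text"])} for r in records]

def generate_content_with_llm(insights):     # GenAI layer
    return [f"Summary for {i['source']}: score {i['score']}" for i in insights]

if __name__ == "__main__":
    content = generate_content_with_llm(
        run_predictive_models(
            extract_and_transform(
                collate_and_validate(["claim_001.pdf", "invoice_17.png"]))))
    print(content)
```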
A thoughtful deployment strategy revolves around the following:
- Data is ingested from the client and posted to the cloud or a client server. It arrives as files in different formats, containing plain text as well as handwritten images.
- A gateway is required to connect the client and your ecosystem.
- A UI is created to deliver an enhanced user experience.
There is typically a need to manage different work items and assign them to the team to ensure timely action:
- Data is ingested into a UI (BPM workflow) tool to manage workflow effectively.
- RPA is used to extract, move, and compare data across multiple systems.
A database schema is designed for storing:
- Structured data directly fed from client systems.
- Unstructured data, converted into a structured format using text-extraction services such as Amazon Textract (sketched below).
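As an illustration of the unstructured-to-structured step, the following sketch calls Amazon Textract through boto3 to pull the text lines out of a scanned document; the bucket and object names are placeholders.

```python
import boto3

# Placeholder bucket/object names, used purely for illustration.
textract = boto3.client("textract", region_name="us-east-1")

response = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "client-intake-bucket", "Name": "forms/claim_001.png"}}
)

# Keep only the detected text lines so they can be mapped into the DB schema.
lines = [block["Text"] for block in response["Blocks"] if block["BlockType"] == "LINE"]
print("\n".join(lines))
```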
For seamless exchange of information across systems, APIs are leveraged.
- The output is consumed through APIs and streamed into message queues, for example Amazon SQS if AWS is the underlying cloud platform.
- In parallel, the data is stored in a data lake (e.g., an Amazon S3 bucket); a combined sketch of the queue and data-lake writes follows this list.
- Likewise, ChatGPT is used as a GenAI platform for building customized solutions and helping solve different business use cases.
- The results are pushed into the application queue, and downstream automation posts the validated data to the UI for client consumption.
- A Secure Shell (SSH) connection is established for deployment from one environment to another; public/private key pairs are often used to secure these connections.
- Integration platforms tie the various applications together into a coherent solution.
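As a rough sketch of the queue and data-lake writes mentioned above (the queue URL, bucket name, and record layout are assumptions):

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

record = {"case_id": "C-1001", "status": "validated", "score": 0.87}

# Stream the validated record into the application queue for downstream automation.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/app-intake-queue",
    MessageBody=json.dumps(record),
)

# In parallel, persist the same record to the S3 data lake.
s3.put_object(
    Bucket="enterprise-data-lake",
    Key="validated/C-1001.json",
    Body=json.dumps(record).encode("utf-8"),
)
```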
Below is a sample of deployment steps:
Setting up front-end:
The front-end web interface is built with Python libraries that create interactive web applications with minimal code, making it easy to integrate with the rest of the stack (a minimal sketch follows the list below):
- Calling the RPA code package.
- Invoking AI/ML models.
- Connecting to the Generative AI service-based application.
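No specific library is mandated here; assuming a low-code framework such as Streamlit, a minimal front end that accepts a client file and triggers the downstream services might look like this (the button handler is a placeholder for the real RPA/AI/GenAI calls):

```python
import streamlit as st

st.title("Intelligent Automation Portal")

uploaded = st.file_uploader("Upload a client file (PDF, image, or text)")

if uploaded is not None:
    st.write(f"Received {uploaded.name} ({uploaded.size} bytes)")

    # Placeholder hook: in a real build this would call the RPA package,
    # the AI/ML scoring models, and the GenAI service.
    if st.button("Run automation"):
        st.success("File queued for extraction, scoring, and content generation.")
```

Such an app would be launched with `streamlit run app.py` and fronted by the gateway described earlier.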
Deployment automation:
The entire deployment of the application can be automated using Terraform, an infrastructure-as-code tool.
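As a sketch, the Terraform workflow itself can be scripted, for example from a CI/CD job, by shelling out to the Terraform CLI; the infra/ directory holding the .tf files is an assumption.

```python
import subprocess

TF_DIR = "infra/"  # hypothetical directory containing the .tf files

def terraform(*args):
    # Run a Terraform CLI command against TF_DIR and fail loudly on error.
    subprocess.run(["terraform", f"-chdir={TF_DIR}", *args], check=True)

terraform("init")
terraform("plan")
terraform("apply", "-auto-approve")
```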
RPA package creation:
- The package is created with the designed RPA bots.
- Queues are established.
- The DB schema is developed, along with the required folder structure.
- The orchestrator is set up to capture the transaction count, SMTP server, TMHP portal, and credentials.
Setting up code repositories:
- AWS CodeCommit hosts the private Git repositories.
- The CodeCommit repository link is used, and encryption parameters are set up for HTTPS (see the sketch after this list).
- AWS CodePipeline automatically deploys code from the CodeCommit repository into the EC2 instance.
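For illustration, the repository itself can be created and its HTTPS clone URL retrieved through boto3; the repository name and description are placeholders.

```python
import boto3

codecommit = boto3.client("codecommit", region_name="us-east-1")

# Hypothetical repository for the automation code base.
repo = codecommit.create_repository(
    repositoryName="intelligent-automation-bots",
    repositoryDescription="RPA, AI/ML, and GenAI solution code",
)

# The HTTPS clone URL is what developers and the pipeline use; Git credentials
# or the AWS credential helper handle authentication and encryption in transit.
print(repo["repositoryMetadata"]["cloneUrlHttp"])
```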
Setting up the CI/CD pipeline:
Use this CI/CD pipeline going forward and execute the following steps:
- Install the Python dependencies by running install_dependencies.sh, followed by configuration of the required files.
- Private and public keys are generated, and strong ciphers are enabled, to enhance the security of the SSH channel.
- The server starts, and the CI/CD pipeline pushes updates and issues the restart_service command (a deployment sketch over SSH follows this list).
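A minimal sketch of these steps driven over SSH with Paramiko and key-based authentication; the host, user, key path, and command names are placeholders (restart_service stands in for whatever restart mechanism the pipeline uses).

```python
import paramiko

# Placeholder connection details for the target EC2 instance.
HOST = "ec2-198-51-100-10.compute-1.amazonaws.com"
USER = "ec2-user"
KEY_PATH = "/home/ci/.ssh/deploy_key.pem"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, key_filename=KEY_PATH)

# Install Python dependencies, then restart the application service.
for command in ("bash install_dependencies.sh", "./restart_service"):
    _stdin, stdout, stderr = client.exec_command(command)
    print(stdout.read().decode(), stderr.read().decode())

client.close()
```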
Optimizing data structures:
Repeated read requests from the database to the AI/ML model can be avoided by caching frequently used data in an in-memory store such as Redis (Remote Dictionary Server), which serves many concurrent clients.
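A minimal cache-aside sketch with the redis-py client; the Redis endpoint, key layout, and the stand-in database read are assumptions.

```python
import json
import redis

# Placeholder endpoint for the Redis (or ElastiCache) instance.
cache = redis.Redis(host="localhost", port=6379, db=0)

def load_features_from_db(case_id):
    # Stand-in for the real database read that feeds the AI/ML model.
    return {"case_id": case_id, "amount": 1250.0, "priority": "high"}

def get_features(case_id, ttl_seconds=300):
    key = f"features:{case_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # served from memory, no DB round trip
    features = load_features_from_db(case_id)
    cache.setex(key, ttl_seconds, json.dumps(features))
    return features

print(get_features("C-1001"))
```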
Backup setup:
Daily backups are scheduled inside the Amazon SageMaker notebook via cron, using crontab -e, which opens the vi editor by default.
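As an illustration, the backup itself can be a small script that copies the notebook working directory to the data lake, scheduled with cron; the bucket name, paths, and schedule below are assumptions.

```python
# backup_to_s3.py - example daily backup of notebook files to S3.
# Example crontab entry (added via `crontab -e`):
#   0 1 * * * /usr/bin/python3 /home/ec2-user/SageMaker/backup_to_s3.py
import os
import boto3

s3 = boto3.client("s3")
SOURCE_DIR = "/home/ec2-user/SageMaker"  # typical notebook working directory
BUCKET = "enterprise-data-lake"          # placeholder backup bucket

for root, _dirs, files in os.walk(SOURCE_DIR):
    for name in files:
        path = os.path.join(root, name)
        key = "notebook-backups/" + os.path.relpath(path, SOURCE_DIR)
        s3.upload_file(path, BUCKET, key)
```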
Generative AI solution deployment:
- Configure parameters in a .tf file, which the Terraform scripts use to create the necessary AWS resources.
- Amazon Bedrock and Amazon ECS with AWS Fargate are used to deploy the GenAI solution.
- Initialize the Terraform plan through the terminal and deploy the application (a sketch of calling Bedrock from the deployed service follows this list).
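Once deployed, the containerized service can call Amazon Bedrock through boto3; the sketch below uses the Converse API with an illustrative model ID (substitute whichever model the account has enabled).

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Model ID is illustrative; use the Bedrock model enabled in your account.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize the validated claim data for the client report."}],
    }],
)

print(response["output"]["message"]["content"][0]["text"])
```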
The idea is to have a secure environment with encryption enabled, so that all the systems can synchronize through an integrated platform: from data ingestion and RPA-driven extraction, through data analytics with AI/ML, to GenAI content generation, all deployed on the cloud with a seamless, automated deployment pipeline.