Microsoft, AWS and Google subject matter experts discuss Generative AI - including LLM Model Comparisons, GenAI Cost Management, RAG Data Architecture, NIST AI Risk Management Framework, AI-Generated Code/Programming, AI Chipsets, Responsible AI - plus CyberSecurity, Hybrid/Multi Cloud/AI Platforms, Data Analytics and more.

Angelbeat provides a unique opportunity to directly interact with, ask questions of, and learn from the world’s three leading Cloud/AI firms in one focused program.

Scroll down to see the detailed agenda and speakers. Click on a speaker’s name to view their LinkedIn profile, or on a session title for additional information. CPE credit hours are provided.

While the event is designed for on-site RTP/RDU attendees, every presentation is also livestreamed via Zoom, like a regular webinar, for remote/online viewers. Please use your Angelbeat account, created on the secure Memberspace platform, to register by clicking the green button. You are automatically signed up for the Zoom livestream, and can then request to attend in person (limited to 40 individuals), which includes lunch plus priority for scheduling one-on-one technical briefings with AWS, Microsoft or Google.

Date/Time: November 14, 9 am ET until Noon

Location: Hyatt House Raleigh/RDU/Brier Creek, 10030 Sellona Street, Raleigh, NC, 27617. Free parking and WiFi, easy access off major roadways.


Speakers, Topics, Agenda

9 am ET Ron Gerber, CEO Angelbeat

Angelbeat and Event Overview

NIST AI Risk Management Framework (RMF)

Ron summarizes the day’s agenda and outlines the importance of creating an AI Risk Management Framework (RMF) based on NIST standards.

NIST recently released NIST-AI-600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. Developed in part to fulfill a Presidential Executive Order, the profile can help organizations identify the unique risks posed by generative AI, and proposes risk-management actions that best align with their goals and priorities.


9:20 Aastha Bhardwaj, Senior Security Specialist, AWS

Security Primer for GenAI

This highly relevant session will focus on these three topics:

  • 3 Ways to Strategically and Tactically Consider GenAI Security

  • How to Leverage the Generative AI Security Scoping Matrix

  • Considerations for Securing Different Types of Generative AI Workloads

The Generative AI Security Scoping Matrix provides a common language and best practices for securing GenAI solutions. You’ll leave the session with a framework and techniques that can be leveraged to support responsible adoption of Generative AI solutions, while enabling the business to move at an ever-increasing pace.


10:00 Jamie Duncan, Application Transformation Customer Engineer, Google

Context-Aware AI Code Generation: Retrieval Augmentation (RAG) and Vertex AI Codey APIs

AI code generation is the use of artificial intelligence (AI) and machine learning (ML) to create code based on a user’s conversational prompt. For example, Gemini Code Assist offers developer code generation and completion capabilities.

Code can be generated based on general best practices and organizational governance, for development tasks in programming languages like Python, JavaScript, Prolog and Fortran, or even from a natural language description of the desired code in a hardware language such as Verilog.

Retrieval augmented generation, or RAG, is a way to use external data or information to improve the accuracy of large language models (LLMs). Jamie will explore how to use RAG to improve the output quality of Google Cloud AI models for code completion and generation on Vertex AI using its Codey APIs, a suite of code generation models that can help software developers complete coding tasks faster. There are three Codey APIs that help boost developer productivity:

  • Code completion: Get instant code suggestions based on your current context, making coding a seamless and efficient experience. This API is designed to be integrated into IDEs, editors, and other applications to provide low-latency code autocompletion suggestions as you write code.

  • Code generation: Generate code snippets for functions, classes, and more in seconds by describing the code you need in natural language. This API can be helpful when you need to write a lot of code quickly or when you're not sure how to start. It can be integrated into IDEs, editors, and other applications including CI/CD workflows.

  • Code chat: Get help on your coding journey throughout the software development lifecycle, from debugging tricky issues to expanding your knowledge with insightful suggestions and answers. This multi-turn chat API can be integrated into IDEs, and editors as a chat assistant. It can also be used in batch workflows.
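The retrieval-augmented approach Jamie describes can be sketched in a provider-agnostic way: retrieve the most relevant in-house snippets, then prepend them to the prompt sent to a code-generation model. The keyword-overlap scoring, the sample snippets, and the prompt format below are illustrative assumptions, not the actual Codey APIs; a real system would send the augmented prompt to a model such as Codey on Vertex AI.

```python
import re

# Minimal RAG sketch for code generation. Retrieval here is a naive
# keyword-overlap ranking; a production system would use embeddings
# and call a hosted model (e.g., Codey on Vertex AI) with the prompt.

def tokenize(text: str) -> set[str]:
    """Lowercase alphanumeric tokens (underscores split identifiers)."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, document: str) -> int:
    """Naive relevance score: count of shared tokens."""
    return len(tokenize(query) & tokenize(document))

def retrieve(query: str, snippets: list[str], k: int = 1) -> list[str]:
    """Return the k most relevant snippets for the query."""
    return sorted(snippets, key=lambda s: score(query, s), reverse=True)[:k]

def build_prompt(query: str, snippets: list[str]) -> str:
    """Prepend retrieved context so the model sees org-specific code."""
    context = "\n".join(retrieve(query, snippets))
    return f"# Relevant internal code:\n{context}\n# Task: {query}\n"

# Hypothetical organizational snippet store.
snippets = [
    "def fetch_user(user_id): return db.query(User).get(user_id)",
    "def send_email(to, subject, body): smtp.send(to, subject, body)",
]
prompt = build_prompt("write a function to fetch a user by id", snippets)
print(prompt)
```

The augmented prompt grounds the model in organizational code, which is the core idea behind improving output quality with RAG.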


10:40 John Weidenhammer, Senior Systems Engineer, Cloudbrink

Fast, Secure Connectivity for Users Everywhere: Accelerate Your Business and Eliminate VPNs That Often Hurt Remote-Workforce Productivity

Cloudbrink is purpose-built to deliver the industry’s highest performance connectivity to remote and hybrid workers, anywhere in the world. With an all-software solution, and leveraging a highly secure zero-trust model, Cloudbrink drives accelerated performance for SaaS, UCaaS and datacenter apps. Its approach is a radical shift in the market, boosting productivity for end users and vastly simplifying the lives of network, security and IT administrators.


11:00 Andrew Thomas, AI/ML Technical Specialist, Microsoft

Microsoft OpenAI Partnership, GenAI Cost Management, LLM Model Comparison, Responsible AI, AI Chipsets

Andrew will discuss these highly relevant GenAI initiatives from Microsoft.

Azure OpenAI Service/Partnership and o1 Models; New Advanced Capabilities and Business Applications:

In September, OpenAI introduced a groundbreaking family of models known as o1, often referred to as "Ph.D. models" due to their advanced capabilities. Now accessible through Azure OpenAI Service, o1 represents a significant leap in artificial intelligence, particularly in reasoning and problem-solving tasks. The o1 models have been seen solving problems like counting the number of R's in the word "strawberry" as well as logic puzzles; attendees will learn how these capabilities can be applied in business settings.

Strategies and Tactical Plans to Estimate, then Manage Costs for Azure OpenAI Services

Use the Azure Pricing Calculator to estimate costs for OpenAI-powered GenAI applications; for instance, Azure OpenAI base series and Codex series models are charged per 1,000 tokens. Then as you deploy Azure resources, leverage ongoing cost management tools to set budgets and monitor costs.
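As a rough illustration of per-token billing, a back-of-the-envelope estimator multiplies expected token volume by the per-1,000-token rate. The rates below are placeholder assumptions, not current Azure OpenAI pricing; use the Azure Pricing Calculator for actual figures.

```python
# Back-of-the-envelope cost estimator for per-token GenAI billing.
# The rates passed in are placeholder assumptions, NOT actual Azure
# OpenAI pricing -- consult the Azure Pricing Calculator for real figures.

def estimate_monthly_cost(requests_per_day: int,
                          avg_prompt_tokens: int,
                          avg_completion_tokens: int,
                          price_per_1k_prompt: float,
                          price_per_1k_completion: float,
                          days: int = 30) -> float:
    """Estimate monthly spend from daily request volume and token rates."""
    prompt_cost = requests_per_day * avg_prompt_tokens / 1000 * price_per_1k_prompt
    completion_cost = (requests_per_day * avg_completion_tokens / 1000
                       * price_per_1k_completion)
    return (prompt_cost + completion_cost) * days

# Example: 10,000 requests/day, 500 prompt + 200 completion tokens each,
# at hypothetical rates of $0.003 and $0.004 per 1,000 tokens.
monthly = estimate_monthly_cost(10_000, 500, 200, 0.003, 0.004)
print(f"${monthly:,.2f}/month")
```

An estimate like this helps size a budget before deployment; Azure's cost management tools then track actual spend against it.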

Empowering Responsible AI Practices

Understand Microsoft’s commitment to Responsible AI and the advancement of AI driven by ethical principles.

Azure AI Chipsets and Infrastructure

Compare different chipset/infrastructure options for Azure ND Series GPU VMs (Nvidia H100, AMD MI300X, Intel Xeon), then decide which platform will deliver and optimize the performance of your most compute-intensive AI workloads.

LLM Comparative Analysis using Model as a Service (MaaS) Architecture

Learn how the Azure AI Inference connector empowers you to experiment with a broader range of models hosted on Azure within your applications. For example, see how you can test/evaluate three models against the widely recognized Measuring Massive Multitask Language Understanding (MMLU) dataset and produce benchmarking results.
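At its core, a comparative evaluation like this scores each model's multiple-choice answers against the benchmark's answer key. The sketch below assumes predictions have already been collected (the hard-coded model outputs and model names are made up for illustration); in practice those predictions would come from calls through the Azure AI Inference connector.

```python
# Sketch of scoring multiple-choice predictions against an answer key,
# as in an MMLU-style benchmark. The model names and predictions are
# hypothetical stand-ins for outputs collected from hosted models.

def accuracy(predictions: list[str], answer_key: list[str]) -> float:
    """Fraction of predictions matching the reference answers."""
    correct = sum(p == a for p, a in zip(predictions, answer_key))
    return correct / len(answer_key)

answer_key = ["A", "C", "B", "D", "A"]
model_runs = {
    "model-1": ["A", "C", "B", "A", "A"],  # 4/5 correct
    "model-2": ["A", "B", "B", "D", "C"],  # 3/5 correct
    "model-3": ["A", "C", "B", "D", "A"],  # 5/5 correct
}

for name, preds in model_runs.items():
    print(f"{name}: {accuracy(preds, answer_key):.0%}")
```

The full MMLU benchmark spans 57 subjects, but the scoring loop is the same: one accuracy figure per model, enabling side-by-side comparison.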