Elynox & Justrite/Hughes Safety | AI-Augmented Workflow Workshop FAQ
Thank you for your active participation and insightful questions during our AI-Augmented Workflow workshop. To support your team’s continued learning and exploration, we’ve compiled the key questions raised during our session into this Frequently Asked Questions (FAQ) document.
These questions are organized by theme to provide a clear, easy-to-reference guide that builds upon the concepts we discussed.
1. Getting Started & Access
Question: How do we access the tools and materials for the workshop and future demos?
Answer: For the workshop, we used a dedicated microsite and a demo platform. Access to the microsite requires a one-time sign-up using your company email address (@justrite.com or @hughes-safety.com). For other platforms, like the final end-to-end demo, specific login credentials will be provided. This highlights a key point: different tools have different access methods, and ensuring smooth, secure access is always the first step in any workflow.
2. Understanding the Core Concepts
Question: This is all new to us. How do we translate our business process into something the AI can understand, like a ‘JSON contract’? How would we even know where to start?
Answer: This is a fantastic and central question. The goal is not for you to become programmers, but to make your expert knowledge explicit so the AI can act on it.
A JSON contract is simply a machine-readable template or a digital checklist. It tells the AI what information to look for and how to structure its response.
You don’t need to know how to write one from scratch. The process starts with your expertise:
- We begin by discussing your process, just as we did in our discovery sessions.
- We identify the key steps, required information, and desired outputs.
- Using that conversation and existing examples (like the ones from the workshop), we can instruct an AI to help us generate the initial JSON contract.
Think of it as co-creating a detailed job description for your digital assistant. You provide the “what,” and we use the tools to create the “how.”
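To make this concrete, here is a sketch of what such a contract might look like for an RFQ-review step. The field names below are purely illustrative (they are not the actual workshop templates), but they show the idea: a plain, readable checklist that tells the AI what to find and how to report it.

```json
{
  "task": "rfq_requirement_extraction",
  "inputs": ["rfq_document"],
  "required_fields": {
    "customer_name": "string",
    "quotation_deadline": "date (YYYY-MM-DD)",
    "product_specifications": "list of strings",
    "compliance_standards": "list of strings"
  },
  "output_format": "structured summary with one entry per required field",
  "on_missing_information": "flag the field as 'NOT FOUND' rather than guessing"
}
```

Notice that nothing here is programming in the traditional sense; it is your process knowledge written down unambiguously.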
3. The Building Process & Tools
Question: What is the step-by-step process for building and running these agents in our own tools, like Copilot or ChatGPT?
Answer: The hands-on portion of our workshop simulated the core process, which can be broken down into these general steps:
- Create an Agent: In your tool (Copilot or ChatGPT), you start by creating a new agent and giving it a name, description, and a set of core Instructions that define its role and guardrails.
- Provide Knowledge: You upload the “knowledge” files the agent needs to perform its job, such as the JSON contracts or other templates. (We noted some organizational permissions may restrict file types, which is a common IT setting.)
- Start the Agent: You initiate the process with a kickoff prompt, which tells the agent to prepare for its task.
- Provide Project Files: You then provide the specific files for the task at hand (e.g., the RFQ documents for a particular project).
- Chain the Agents: In our workshop, we simulated the “chain” by manually copying the output from one agent and pasting it as the input for the next. In a fully deployed production system, this “daisy chain” would be automated, with agents handing off work to each other seamlessly.
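The hand-off we practiced by copy-pasting can be pictured as a simple pipeline. The Python sketch below is illustrative only — the agent functions are stand-ins, not a real Copilot or ChatGPT API — but it captures the “daisy chain” idea: each agent’s output becomes the next agent’s input.

```python
def extraction_agent(rfq_text: str) -> str:
    """Stand-in for the first agent: pulls requirements out of the RFQ."""
    return f"REQUIREMENTS extracted from: {rfq_text}"

def drafting_agent(requirements: str) -> str:
    """Stand-in for the second agent: drafts a quotation from those requirements."""
    return f"QUOTATION draft based on: {requirements}"

def run_chain(rfq_text: str) -> str:
    """Automated 'daisy chain': the output of one step feeds the next.
    In the workshop, this hand-off was done manually by copy-pasting."""
    extracted = extraction_agent(rfq_text)
    draft = drafting_agent(extracted)
    return draft

print(run_chain("contents of RFQ documents"))
```

In production, an orchestration layer performs exactly this hand-off, so no one has to copy and paste between agents.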
4. System Capabilities & Integration
Question: Can the AI system access external, real-time information, like new EHS regulations or web standards?
Answer: Yes, absolutely. While we intentionally turned web search off for most of our demo to ensure the AI only used the provided documents, we can create specialized agents with this capability. A best practice is to have a dedicated “Researcher Agent” in the chain whose specific job is to perform web searches for up-to-date information. This gives us precise control over when and how external data is introduced into the workflow, ensuring auditability and reliability.
Question: How does the system integrate with our other enterprise software, like SAGE, to look up a Bill of Materials (BOM)? Does the AI need to be taught our logins?
Answer: This is a key aspect of moving from a demo to a production system. We can give agents “skills” (or “actions”) to interact with other software.
- A “skill” would be created to teach the agent how to query SAGE for BOM data.
- Regarding logins, modern and secure integration methods are used. The AI agent would be granted secure, programmatic access through an API, not by being “taught” a user’s personal password. This ensures the connection is secure, auditable, and respects your organization’s IT policies.
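As a sketch of that pattern — the endpoint URL and environment-variable name below are hypothetical, and the real SAGE integration would use whatever API your IT team exposes — the agent’s “skill” reads a service credential from a secure store and attaches it to the request, never a person’s password:

```python
import os
from urllib.request import Request

def build_bom_request(part_number: str) -> Request:
    """Construct an authenticated BOM lookup request for an ERP API.

    The token comes from the environment (or a secrets vault managed by IT),
    so no user's personal login is ever 'taught' to the agent."""
    token = os.environ.get("SAGE_API_TOKEN", "<not configured>")
    url = f"https://erp.example.com/api/bom/{part_number}"  # hypothetical endpoint
    return Request(url, headers={"Authorization": f"Bearer {token}"})

request = build_bom_request("JR-4500")
print(request.full_url)
```

Because the credential is issued to the agent itself, IT can scope, rotate, and revoke it independently of any employee account — which is what makes the connection auditable.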
5. Testing & Practical Application
Question: This is great, but can we use this system to test our own, real-world RFQ files and compare the results?
Answer: Yes. This is the most important next step to validate the system’s value for your team. As Simon astutely observed, the system can be used in two primary modes:
- Upfront Collaborator: You provide the initial RFQ files and the system helps you analyze them, extract requirements, and generate the foundational elements of your quotation before you’ve done the bulk of the work.
- Quality Auditor: After you’ve completed a quotation, you can have the system review your work against the original RFQ package to check for gaps, discrepancies, or missed requirements.
Testing with your own files is the best way to refine the process and build confidence in the results.
We hope this FAQ is a valuable resource. Please don’t hesitate to reach out with any further questions as you continue to reflect on the workshop.