Artificial Intelligence Policy
AI Usage Policy
CORE PRINCIPLE: AI suggests, you decide. ShelfGRC uses artificial intelligence to assist and enhance your GRC workflows, but you always maintain full control and final decision-making authority.
1. Introduction
This AI Usage Policy explains how ShelfGRC uses artificial intelligence (AI) and machine learning (ML) technologies, what data is used to power AI features, how AI-generated content is handled, and your rights and controls over AI functionality.
This policy should be read in conjunction with our Terms and Conditions and Privacy Policy.
2. Our AI Philosophy: Propose-Only Workflow
2.1 Human-in-the-Loop Approach
ShelfGRC is built on a "propose-only" AI workflow, also known as "human-in-the-loop" (HITL). This means:
- AI Proposes: Our AI systems analyze your data and generate suggestions, recommendations, and draft content
- You Decide: All AI-generated proposals require explicit human review and approval before being implemented
- No Automatic Actions: AI never makes final decisions or takes actions on your behalf without your approval
- Full Transparency: All AI-generated content is clearly labeled and includes citations or reasoning where applicable
2.2 Why Propose-Only?
Governance, risk, and compliance decisions have significant business and legal implications. While AI can accelerate analysis and provide valuable insights, human judgment, context, and accountability are essential. Our propose-only approach ensures:
- You retain full control and responsibility for GRC decisions
- AI serves as an intelligent assistant, not a replacement for human expertise
- Compliance with regulatory requirements for human oversight
- Accountability and auditability of all decisions
3. How We Use AI in ShelfGRC
3.1 AI-Powered Features
ShelfGRC uses AI to provide the following features:
Risk Assessment Assistance
- Suggest risk ratings based on historical data and industry benchmarks
- Identify potential risks based on your business context
- Recommend risk treatment strategies
- Highlight emerging risk trends
Control Recommendations
- Suggest appropriate controls for identified risks
- Map controls to compliance frameworks (ISO 27001, SOC 2, NIST, etc.)
- Recommend control effectiveness testing procedures
- Identify control gaps and redundancies
Compliance Mapping
- Map your controls and processes to regulatory requirements
- Identify compliance obligations based on your industry and jurisdiction
- Suggest evidence to demonstrate compliance
- Generate compliance status summaries
Evidence Analysis
- Analyze uploaded documents and extract relevant compliance information
- Suggest evidence categorization and tagging
- Identify missing or outdated evidence
- Generate evidence summaries
Incident Response
- Suggest incident severity classifications
- Recommend response actions based on incident type
- Identify related risks and controls
- Generate incident reports and timelines
Report Generation
- Generate draft executive summaries and board reports
- Create compliance status reports
- Produce risk heatmaps and trend analyses
- Suggest key performance indicators (KPIs)
Data Ingestion and Mapping
- Analyze CSV/Excel uploads and suggest field mappings
- Categorize imported data using our risk taxonomy
- Identify data quality issues and suggest corrections
- Map external data to ShelfGRC data structures
3.2 AI Technologies We Use
ShelfGRC leverages the following AI technologies:
- Large Language Models (LLMs): OpenAI GPT-4, Anthropic Claude, or similar for natural language understanding and generation
- Machine Learning Models: Custom-trained models for risk scoring, classification, and pattern recognition
- Natural Language Processing (NLP): For document analysis, evidence extraction, and semantic search
- Vector Embeddings: For similarity search and intelligent content retrieval
4. Data Used for AI Processing
4.1 Your Customer Data
To provide AI-powered features, we process your Customer Data, including:
- Risk assessments and risk register entries
- Control descriptions and effectiveness data
- Evidence documents and metadata
- Compliance obligations and framework mappings
- Incident reports and response actions
- Organizational context (industry, size, jurisdiction)
4.2 Data Isolation and Privacy
Important: Your Customer Data is processed in isolation and is never used to train AI models that serve other customers. Specifically:
- Your data remains within your tenant boundary (multi-tenant isolation)
- AI processing is performed on a per-tenant basis
- We do not create shared AI models trained on multiple customers' data
- Your data is not shared with other ShelfGRC users
4.3 Third-Party AI Providers
We use third-party AI providers (e.g., OpenAI, Anthropic) to power certain AI features. When using these services:
- We send only the minimum necessary data to generate AI responses
- We use enterprise agreements with data processing addendums (DPAs)
- We configure providers to not use your data for training their models
- Data is transmitted securely using encryption (TLS 1.3)
- We do not send personally identifiable information (PII) unless necessary for the specific AI task
5. AI Accuracy and Limitations
5.1 AI Is Not Perfect
While our AI systems are designed to be helpful and accurate, they have limitations:
- Errors and Inaccuracies: AI-generated content may contain factual errors, outdated information, or incorrect recommendations
- Hallucinations: AI may generate plausible-sounding but incorrect or fabricated information
- Bias: AI models may reflect biases present in training data
- Context Limitations: AI may not fully understand your unique business context or nuanced requirements
- Regulatory Changes: AI recommendations may not reflect the latest regulatory updates
5.2 Your Responsibility to Review
You are solely responsible for reviewing, validating, and approving all AI-generated content before using it. This includes:
- Verifying the accuracy of AI suggestions
- Ensuring compliance with applicable laws and regulations
- Adapting AI recommendations to your specific context
- Consulting with legal, compliance, or other professional advisors as needed
5.3 No Liability for AI Errors
As stated in our Terms and Conditions, we are not liable for decisions made based on AI-generated content. AI proposals are advisory only and do not constitute professional advice.
6. Transparency and Explainability
6.1 Clear Labeling
All AI-generated content in ShelfGRC is clearly labeled with indicators such as:
- "AI Suggested" or "AI Generated" badges
- Distinct visual styling (e.g., colored borders, icons)
- Timestamps showing when AI content was generated
6.2 Reasoning and Citations
Where applicable, AI proposals include:
- Reasoning: Explanation of why the AI made a particular suggestion
- Citations: References to source data, frameworks, or regulations
- Confidence Scores: Indicators of AI confidence in its suggestions
6.3 Audit Trail
All AI proposals and your approval/rejection decisions are logged in an audit trail, including:
- What AI content was generated
- When it was generated
- Who reviewed and approved/rejected it
- Any modifications made before approval
7. Your Control Over AI Features
7.1 Opt-Out Options
You have control over AI features in ShelfGRC:
- Feature-Level Control: Enable or disable specific AI features in your account settings
- Proposal Review: Accept, reject, or modify any AI proposal
- Feedback: Provide feedback on AI suggestions to help us improve
7.2 Data Deletion
If you delete Customer Data from ShelfGRC, it is also removed from AI processing pipelines. See our Privacy Policy for data retention details.
8. AI Model Training and Improvement
8.1 Aggregated Analytics
We may use aggregated, anonymized usage data to improve AI features, such as:
- Which types of AI suggestions are most frequently accepted or rejected
- Common patterns in risk assessments across industries (anonymized)
- Performance metrics (response time, accuracy)
This aggregated data does not contain personally identifiable information or specific Customer Data.
8.2 Opt-Out of Analytics
You may opt out of contributing to aggregated analytics in your account settings. This will not affect the functionality of AI features.
9. AI Security and Safety
9.1 Prompt Injection Protection
We implement safeguards to prevent malicious manipulation of AI systems, including:
- Input validation and sanitization
- Prompt injection detection
- Output filtering and safety checks
9.2 Content Moderation
AI-generated content is filtered to prevent:
- Harmful, offensive, or inappropriate content
- Disclosure of sensitive information
- Generation of malicious code or instructions
9.3 Monitoring and Incident Response
We monitor AI systems for anomalies, errors, and security issues. If we detect a problem affecting AI accuracy or safety, we will:
- Investigate and remediate the issue promptly
- Notify affected users if necessary
- Implement corrective measures to prevent recurrence
10. Compliance with AI Regulations
We are committed to compliance with emerging AI regulations, including:
- EU AI Act: Classification of AI systems and compliance with transparency and risk management requirements
- NIST AI Risk Management Framework: Alignment with trustworthy AI principles (valid, reliable, safe, secure, resilient, accountable, transparent, explainable, privacy-enhanced, fair)
- Australian AI Ethics Principles: Human-centered values, fairness, privacy protection, reliability, transparency, contestability, accountability
11. Feedback and Reporting Issues
11.1 Report AI Errors
If you encounter AI-generated content that is:
- Factually incorrect or misleading
- Biased or discriminatory
- Inappropriate or harmful
- Not functioning as expected
Please report it to us at contact@shelflabs.io. Include details about the issue and, if possible, a screenshot or description of the AI output.
11.2 Continuous Improvement
Your feedback helps us improve AI accuracy, safety, and usefulness. We review all feedback and use it to refine our AI systems.
12. Changes to This AI Usage Policy
We may update this AI Usage Policy as we introduce new AI features or in response to regulatory changes. We will notify you of material changes via email or in-app notification at least 30 days before changes take effect.
13. Contact Us
If you have questions, concerns, or feedback about our use of AI, please contact us:
Related Legal Documents