The Call For Papers AI team takes security seriously. We appreciate your efforts to responsibly disclose any security vulnerabilities you find.
Please DO NOT report security vulnerabilities through public GitHub issues.
Instead, please report security vulnerabilities by:
- Creating a private security advisory on GitHub (preferred):
  - Go to the Security tab of this repository
  - Click "Report a vulnerability"
  - Fill out the advisory form with details
- Emailing us directly (when available):
  - Send an email to security@callforpapers-ai.org
  - Include "SECURITY" in the subject line
  - Provide detailed information about the vulnerability
When reporting a security vulnerability, please include:
- Type of vulnerability: What kind of security issue is this?
- Affected components: Which parts of the system are affected?
- Attack scenario: How could this be exploited?
- Impact assessment: What could an attacker achieve?
- Steps to reproduce: Detailed steps to reproduce the vulnerability
- Proof of concept: Code, screenshots, or other evidence (if applicable)
- Suggested fix: Any ideas for how to resolve the issue
- Your contact information: How we can reach you for follow-up
We're particularly interested in vulnerabilities related to:
- Authentication bypass: Unauthorized access to systems or data
- Authorization flaws: Privilege escalation or access control issues
- Data exposure: Sensitive information disclosure
- Injection attacks: SQL injection, XSS, command injection, etc.
- API security: Insecure API endpoints or improper validation
- Infrastructure security: Server misconfigurations or vulnerabilities
- AI/ML security: Model poisoning, adversarial attacks, or data leakage
We will respond to your report on the following timeline:
- 48 hours: Initial acknowledgment of your report
- 7 days: Preliminary assessment and severity classification
- 30 days: Regular updates on investigation progress
- 90 days: Target resolution or mitigation
The following are in scope:
- Call For Papers AI web applications and APIs
- Related infrastructure and services
- Mobile applications (if applicable)
- AI models and training systems
- Third-party integrations we control
The following are out of scope:
- Third-party services we don't control
- Social engineering attacks
- Physical security issues
- Denial of service attacks
- Issues in dependencies (report to the dependency maintainers)
- Issues requiring physical access to devices
We follow responsible disclosure practices:
- Investigation: We investigate and validate the report
- Fix development: We develop and test a fix
- Coordination: We coordinate the disclosure timeline with you
- Public disclosure: We publicly disclose after the fix is deployed
To help keep our community secure:
For users:
- Keep your software and dependencies up to date
- Use strong, unique passwords
- Enable two-factor authentication when available
- Be cautious with suspicious links or attachments
- Report security concerns promptly
For contributors:
- Follow secure coding practices (see the sketch after this list)
- Review code for security issues
- Use dependency scanning tools
- Keep dependencies updated
- Follow our security guidelines in CONTRIBUTING.md
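As a minimal illustration of the secure coding practices above (and of the injection risks listed earlier in this policy), the following Python sketch contrasts string-built SQL with a parameterized query. It is a generic, hypothetical example and is not taken from the Call For Papers AI codebase; the table and function names are invented for demonstration.

```python
import sqlite3

# Throwaway in-memory database for demonstration only (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

def find_user_unsafe(email: str):
    # DON'T: interpolating untrusted input into the SQL string enables injection.
    return conn.execute(
        f"SELECT id, email FROM users WHERE email = '{email}'"
    ).fetchall()

def find_user_safe(email: str):
    # DO: parameterized queries keep user data out of the SQL statement itself.
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchall()

# A classic injection payload returns every row via the unsafe path
# but matches nothing via the parameterized one.
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [(1, 'alice@example.com')] -- data leaked
print(find_user_safe(payload))    # []
```

The same principle applies to any database driver or ORM: pass untrusted input as bound parameters rather than building queries through string formatting.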
When investigating security issues:
- Don't access data that doesn't belong to you
- Don't disrupt services or systems
- Don't social engineer our staff or users
- Don't publicly disclose vulnerabilities before we've addressed them
- Don't spam our security reporting channels
We appreciate security researchers who help us improve our security:
- Acknowledgment: We'll thank you in our security advisories (if desired)
- Recognition: We may recognize significant contributions publicly
- Communication: We'll keep you updated on our progress
- Feedback: We welcome your feedback on our security practices
If you have questions about our security policy:
- Review our Contributing Guidelines
- Create a discussion issue (for general security questions)
- Contact us through the appropriate security channels
This security policy may be updated periodically. We will:
- Notify the community of significant changes
- Version our policy for transparency
- Archive previous versions for reference
Last Updated: January 2025
Version: 1.0
Thank you for helping us keep Call For Papers AI secure!