Managing Bandit False Positives Without Breaking Security
Six months after I adopted Bandit, I realized we had a false positive problem. Not because the tool was broken, but because we’d gotten sloppy about how we handled legitimate security warnings. We were adding # nosec comments everywhere, effectively turning off security scanning one line at a time.
The wake-up call came during a review when we discovered that some # nosec comments were suppressing real vulnerabilities, not false positives. We’d trained ourselves to ignore Bandit’s warnings instead of understanding them. It was a sobering reminder that security tools are only as good as the discipline with which you use them.
The Seductive Danger of False Positive Fatigue
False positives are the enemy of effective security scanning. When developers see too many irrelevant warnings, they start ignoring all warnings, including the important ones. It’s a classic case of crying wolf: if the security tool flags innocent code as dangerous too often, people stop paying attention when it finds real problems.
But the solution isn’t to silence every warning that seems inconvenient. The solution is to understand why Bandit is flagging specific code and make thoughtful decisions about whether those warnings add value to your security posture.
Understanding Bandit’s Perspective
Bandit operates under the assumption that any code pattern associated with historical security vulnerabilities deserves scrutiny. This approach is conservative by design: it would rather flag innocent code than miss a real vulnerability.
When Bandit flags subprocess.call() usage, it’s not saying that subprocess calls are inherently evil. It’s saying that subprocess calls have been involved in security vulnerabilities often enough that they warrant careful review. Sometimes that review confirms the code is safe; sometimes it reveals real security issues.
The key is treating each finding as a conversation starter rather than a binary pass/fail test. Bandit is asking: “Are you sure this code is safe? Have you considered how it might be abused? Is there a more secure way to accomplish the same thing?”
True False Positives vs. Acceptable Risk
Not every Bandit warning represents a security vulnerability, but not every non-vulnerability is a false positive either. Some warnings highlight code that’s technically secure in its current context but could become vulnerable if circumstances change.
```python
# Test file - legitimate use of hardcoded "password"
def test_login():
    # This triggers B105 but isn't a real security issue
    test_password = "fake_password_for_testing"  # nosec B105
    result = authenticate_user("testuser", test_password)
    assert result.success
```

```python
import subprocess

# Utility script - legitimate subprocess usage
def deploy_staging():
    # B602: subprocess with shell=True, but controlled environment
    subprocess.call("rsync -av ./dist/ staging:/var/www/", shell=True)  # nosec B602
```
For example, hardcoded passwords in test files aren’t usually security vulnerabilities; they’re test data that doesn’t protect real resources. But flagging them serves a valuable purpose: it reminds developers that hardcoding secrets is generally a bad practice and helps prevent real passwords from accidentally ending up in test files.
The Art of Thoughtful Suppression
When you encounter a Bandit warning that doesn’t represent a genuine security risk in your context, you have several options for handling it. The least disciplined approach is adding a blanket # nosec comment, which silences the warning without documenting why it’s safe.
Better approaches involve documenting your reasoning and making suppression decisions at the appropriate granularity. If specific test files legitimately need to use hardcoded passwords, exclude those files from password detection rules rather than adding # nosec comments throughout the codebase.
Some teams implement code review requirements for security suppressions, ensuring that decisions to ignore Bandit warnings get the same scrutiny as other security-relevant code changes.
Configuration-Based Exclusions
Bandit’s configuration system provides more surgical alternatives to suppression comments. Instead of sprinkling # nosec throughout your codebase, you can configure Bandit to skip specific rules in specific contexts.
```yaml
# .bandit configuration
exclude_dirs:
  - tests/fixtures
  - deployment/scripts

skips:
  - B101  # Skip assert usage warnings

# Rule-specific exclusions
assert_used:
  exclude: ['tests/*']
```
You might exclude certain directories from password detection (like test fixtures), disable subprocess warnings in deployment scripts, or adjust severity levels for findings that represent lower risk in your environment. These configuration-based approaches centralize suppression decisions and make them easier to review and update.
The advantage of configuration-based exclusions is that they’re explicit and visible. When someone reviews your Bandit configuration, they can understand what types of warnings your team considers acceptable and why.
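If your project already centralizes tool settings in pyproject.toml, Bandit can read its configuration from there too, when installed with TOML support and pointed at the file via `-c`. A minimal sketch mirroring the exclusions above (the directory names are illustrative):

```toml
# pyproject.toml - run with: bandit -c pyproject.toml -r src/
[tool.bandit]
exclude_dirs = ["tests/fixtures", "deployment/scripts"]
skips = ["B101"]
```

Keeping the suppression policy next to your other tool configuration gives reviewers one obvious place to look.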
Building Team Guidelines
Effective false positive management requires clear team guidelines about when and how to suppress security warnings. These guidelines should address both the technical mechanics (when to use # nosec vs. configuration exclusions) and the decision-making process (who can approve suppressions, what documentation is required).
Some teams adopt policies like “no suppressions without code review” or “all suppressions must include justification comments.” Others create escalation processes where developers can suppress low-severity findings independently but need security team approval for high-severity suppressions.
The specific policies matter less than having policies at all. What kills security tool adoption is inconsistency: when some developers suppress warnings liberally while others agonize over every finding, you end up with an uneven security posture and team frustration.
The Documentation Imperative
Every security suppression should include documentation explaining why the warning doesn’t represent a genuine security risk. This documentation serves multiple purposes: it forces the person making the suppression to think through their reasoning, it provides context for future code reviewers, and it creates a record of security decisions that can be revisited if circumstances change.
Good suppression documentation answers three questions: What security risk is Bandit detecting? Why doesn’t this risk apply in this context? What would need to change for this to become a real security issue?
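As a sketch (the function name and literal are hypothetical), a suppression whose comment answers all three questions might look like this:

```python
def test_password_rotation():
    # B105 (hardcoded password string): Bandit flags the literal below as a
    # potential credential. It is safe here because the value exists only in
    # the test suite and guards no real resource. It would become a real
    # issue if this literal were ever copied into deployed configuration.
    throwaway_password = "rotate-me-not-a-secret"  # nosec B105
    assert throwaway_password  # test data, not a production secret
```

A reviewer reading this six months later can re-evaluate the decision without reconstructing the original author’s reasoning.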
Learning from Suppression Patterns
Over time, patterns in your security suppressions reveal opportunities to improve both your codebase and your Bandit configuration. If you’re suppressing the same type of warning repeatedly, that might indicate a need for better configuration rather than more suppression comments.
For example, if you’re constantly suppressing hardcoded password warnings in test files, consider restructuring your test data management or configuring Bandit to exclude test directories from password detection. If you’re suppressing subprocess warnings in legitimate administrative scripts, consider whether those scripts could be moved to a separate directory with relaxed security rules.
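One way to restructure that test data (names here are hypothetical): keep throwaway credentials in a fixture file under a directory your Bandit configuration already excludes, and load them at test time, so B105 has nothing to flag in the test modules themselves:

```python
import json
from pathlib import Path

def load_test_credentials(fixture_dir: Path) -> dict:
    """Load throwaway test credentials from an excluded fixtures directory
    instead of hardcoding them as string literals in test modules."""
    return json.loads((fixture_dir / "credentials.json").read_text())
```

The fixture file lives in a path like `tests/fixtures/`, which the earlier configuration already excludes from scanning.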
Building Security Intuition
The most valuable outcome of thoughtful false positive management is building team intuition about security risks. When developers regularly engage with Bandit findings and make reasoned decisions about suppression, they internalize security principles that influence how they write new code.
Developers who understand why certain code patterns trigger security warnings are less likely to write vulnerable code in the first place. They start thinking about security implications during initial development rather than only when security tools flag problems.
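For example, a developer who understands why B602 fires will reach for an argument list instead of shell=True in the first place. A minimal sketch (the helper name is illustrative):

```python
import subprocess

def run_step(argv: list[str]) -> str:
    # An argument list never passes through a shell, so there is no
    # injection surface and B602 has nothing to flag; no suppression needed.
    result = subprocess.run(argv, check=True, capture_output=True, text=True)
    return result.stdout
```

The earlier `subprocess.call("rsync -av ...", shell=True)` example could become `run_step(["rsync", "-av", "./dist/", "staging:/var/www/"])`, removing the warning rather than silencing it.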
The Long Game
False positive management is really about building organizational security maturity. Teams that handle false positives thoughtfully develop better security judgment, create more maintainable security configurations, and build sustainable relationships between security requirements and development velocity.
Some level of over-detection is inherent in automated security scanning. The goal is to handle false positives in ways that strengthen rather than weaken your overall security posture. When done well, false positive management becomes a vehicle for security education and team alignment rather than just a necessary evil of using security tools.