# Testing Guide

This guide covers testing practices and procedures for FlavumHive development.
## Testing Philosophy

Our testing approach focuses on:

- Comprehensive test coverage
- Test-driven development
- Integration testing
- Automated testing
- Performance testing
## Test Structure

### Unit Tests

```python
def test_twitter_post():
    """Test Twitter posting functionality."""
    handler = TwitterHandler(config)
    result = handler.post_tweet("Test message")
    assert result.success
    assert result.tweet_id is not None
```
### Integration Tests

```python
@pytest.mark.integration
def test_platform_interaction():
    """Test cross-platform interaction."""
    twitter = TwitterHandler(config)
    reddit = RedditHandler(config)

    # Post content
    tweet = twitter.post_tweet("Test")
    reddit_post = reddit.create_post(
        "test", "Title", "Content"
    )

    assert tweet.success
    assert reddit_post.success
```
### Mock Tests

```python
def test_rate_limiting():
    """Test rate limiting with mocks."""
    with patch('flavumhive.RateLimiter') as mock:
        # Patching the class means instances are configured via return_value.
        mock.return_value.check_limit.return_value = False
        handler = TwitterHandler(config)
        result = handler.post_tweet("Test")
        assert not result.success
```
## Running Tests

### Basic Usage

```bash
# Run all tests
pytest

# Run specific test file
pytest tests/test_twitter.py

# Run specific test
pytest tests/test_twitter.py::test_post_tweet
```
### Test Options

```bash
# Run with coverage
pytest --cov=flavumhive tests/

# Generate an HTML coverage report
pytest --cov=flavumhive --cov-report=html tests/

# Run marked tests
pytest -m "integration"
```
## Writing Tests

### Test File Structure

```python
"""Tests for Twitter handler functionality."""
import pytest
from flavumhive import TwitterHandler

@pytest.fixture
def twitter_handler():
    """Create a Twitter handler fixture."""
    config = load_test_config()
    return TwitterHandler(config)

def test_feature(twitter_handler):
    """Test a specific feature."""
    result = twitter_handler.some_method()
    assert result.success
```
### Test Categories

1. **Unit Tests**
   - Single component
   - Isolated functionality
   - Mocked dependencies
   - Quick execution

2. **Integration Tests**
   - Multiple components
   - Real dependencies
   - End-to-end flows
   - Slower execution

3. **Performance Tests**
   - Load testing
   - Stress testing
   - Benchmarking
   - Resource usage
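A minimal performance test can be written with nothing but the standard library's timer. This is a sketch only: the `process_batch` function and its time budget are hypothetical, not part of FlavumHive.

```python
import time

def process_batch(items):
    # Hypothetical function under test: squares each item.
    return [i * i for i in items]

def test_process_batch_performance():
    """Crude benchmark: the batch must finish within a generous budget."""
    start = time.perf_counter()
    result = process_batch(range(10_000))
    elapsed = time.perf_counter() - start
    assert len(result) == 10_000
    # The budget is deliberately loose so the test is not flaky on slow CI machines.
    assert elapsed < 1.0
```

For serious benchmarking, a dedicated plugin such as `pytest-benchmark` gives far more reliable numbers than manual timing.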
## Best Practices

### Test Design

- Test one thing per test
- Use descriptive names
- Include edge cases
- Test error conditions

### Test Implementation

- Use fixtures
- Mock external services
- Clean up resources
- Handle async code properly
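One way to handle async code in tests is to replace the awaited dependency with `unittest.mock.AsyncMock` and drive the coroutine to completion with `asyncio.run`. The `fetch_status` coroutine and its client are hypothetical, for illustration only:

```python
import asyncio
from unittest.mock import AsyncMock

async def fetch_status(client):
    # Hypothetical coroutine under test: asks the client for a status string.
    return await client.get_status()

def test_fetch_status():
    """Mock the async dependency, then run the coroutine synchronously."""
    client = AsyncMock()
    client.get_status.return_value = "ok"
    result = asyncio.run(fetch_status(client))
    assert result == "ok"
    # AsyncMock records awaits, so we can verify the call happened exactly once.
    client.get_status.assert_awaited_once()
```

In a real suite, the `pytest-asyncio` plugin lets you write `async def` tests directly instead of calling `asyncio.run` by hand.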
### Test Maintenance

- Keep tests up to date
- Remove obsolete tests
- Refactor when needed
- Document changes
## Common Patterns

### Setup and Teardown

```python
@pytest.fixture(autouse=True)
def setup_teardown():
    """Set up and tear down for each test."""
    # Setup
    init_test_db()
    yield
    # Teardown
    cleanup_test_db()
```

### Parametrized Tests

```python
@pytest.mark.parametrize("input,expected", [
    ("test1", True),
    ("test2", False),
    ("test3", True),
])
def test_validation(input, expected):
    """Test input validation."""
    result = validate_input(input)
    assert result == expected
```
## Debugging Tests

### Debug Tools

```bash
# Run with verbose output
pytest -vv

# Show print output
pytest -s

# Drop into the debugger on failure
pytest --pdb
```
### Common Issues

1. **Resource Cleanup**
   - Close connections
   - Remove test files
   - Reset state

2. **Async Testing**
   - Use async fixtures
   - Handle coroutines
   - Manage the event loop

3. **Mock Issues**
   - Verify call counts
   - Check arguments
   - Reset mocks
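The mock checks listed above (verifying call counts, checking arguments, resetting mocks) look like this with `unittest.mock`; the `notify` helper here is hypothetical:

```python
from unittest.mock import Mock

def notify(channel, message):
    # Hypothetical helper under test: forwards a message to a channel object.
    channel.send(message)

def test_notify_sends_once():
    channel = Mock()
    notify(channel, "hello")
    # Verify both the call count and the exact arguments in one assertion.
    channel.send.assert_called_once_with("hello")
    # Reset before reusing the mock in another scenario.
    channel.reset_mock()
    assert channel.send.call_count == 0
```

Forgetting the reset is a common cause of tests that pass in isolation but fail when run together.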