DevOps IaC FAQ & Answers
24 expert DevOps IaC answers researched from official documentation. Every answer cites authoritative sources you can verify.

Configure the S3 backend with a DynamoDB table for locking via the backend configuration block. The DynamoDB table must have a primary key named 'LockID'. This prevents state corruption in team environments by ensuring only one Terraform operation runs at a time.
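A minimal sketch of such a backend block (the bucket, table, and region names are illustrative):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state"        # hypothetical bucket
    key            = "envs/prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"    # table whose primary key is the string attribute LockID
    encrypt        = true
  }
}
```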
Set 'use_lockfile = true' in the S3 backend configuration (introduced in Terraform 1.10, GA in 1.11+). S3 uses conditional writes with the If-None-Match header to create .tflock objects, eliminating the DynamoDB dependency. Requires s3:GetObject, s3:PutObject, and s3:DeleteObject permissions on the lock file path. DynamoDB-based locking is now deprecated.
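The same backend without DynamoDB, sketched with an illustrative bucket name and region:

```hcl
terraform {
  backend "s3" {
    bucket       = "mybucket"
    key          = "terraform.tfstate"
    region       = "us-east-1"
    use_lockfile = true   # S3-native locking via conditional writes (Terraform 1.10+)
    encrypt      = true
  }
}
```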
Create action.yml in .github/actions/ with 'runs: using: composite' and a 'runs.steps[]' array. Each step needs a 'shell' parameter (bash, pwsh, etc.). Use 'inputs:' for parameters and 'outputs:' for return values. Best for shared task templates such as docker build, terraform plan, or security scanning. Store in the same repo, or publish to the Marketplace for cross-repo use.
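A minimal composite action along these lines (the path and input name are illustrative):

```yaml
# .github/actions/terraform-plan/action.yml
name: 'Terraform Plan'
description: 'Init and plan in a given directory'
inputs:
  workdir:
    description: 'Directory containing the Terraform configuration'
    required: false
    default: '.'
runs:
  using: composite
  steps:
    - run: terraform init -input=false
      shell: bash
      working-directory: ${{ inputs.workdir }}
    - run: terraform plan -input=false
      shell: bash
      working-directory: ${{ inputs.workdir }}
```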
Use 'uses: org/repo/.github/workflows/workflow.yml@ref' at the job level (not the step level). Define the called workflow with an 'on: workflow_call' trigger. Pass parameters via 'with:' and access outputs via 'needs.job_id.outputs'. Best for pipeline templates with standardized CI/CD patterns, compliance checks, and security scanning. A job that calls a reusable workflow cannot mix in regular steps.
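A sketch of both sides of the call (the org/infra repo and @v1 ref are illustrative):

```yaml
# Callee: org/infra/.github/workflows/deploy.yml
on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying to ${{ inputs.environment }}"
---
# Caller: a workflow in any other repo
on: push
jobs:
  deploy:
    uses: org/infra/.github/workflows/deploy.yml@v1
    with:
      environment: prod
```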
Use 'when:' conditionals with the 'stat' module, or the 'creates'/'removes' parameters, to check state before running commands. Register variables and use 'changed_when: false' to control idempotency reporting for commands that don't properly report changes.
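Two illustrative tasks (the commands and paths are hypothetical):

```yaml
- name: Initialize the data directory only if it does not exist
  ansible.builtin.command: /usr/local/bin/initdb /var/lib/app
  args:
    creates: /var/lib/app/initialized   # task is skipped when this path exists

- name: Read the installed version without reporting a change
  ansible.builtin.command: app --version
  register: app_version
  changed_when: false                   # read-only command, never marked "changed"
```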
Define handlers in the 'handlers:' section and notify from tasks using 'notify: handler_name'. Handlers execute once at the end of the play regardless of how many tasks notified them, ensuring services restart only when needed. Use 'meta: flush_handlers' to force immediate execution. Best practices: guard handlers with conditionals (when:), use check_mode for validation, and monitor handler invocations to optimize. Avoiding unnecessary restarts through proper change detection noticeably shortens deployments.
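A compact play showing the notify/handler pairing (the template name is illustrative):

```yaml
- hosts: web
  tasks:
    - name: Deploy nginx config
      ansible.builtin.template:
        src: nginx.conf.j2        # hypothetical template
        dest: /etc/nginx/nginx.conf
      notify: restart nginx       # queued only when the file actually changes
  handlers:
    - name: restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
```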
Run 'terraform force-unlock LOCK_ID' only after verifying: (1) no team member is running a Terraform operation (confirm via team chat), (2) the lock timestamp in the DynamoDB item or S3 lock file is stale, (3) the holding process truly crashed rather than merely running slowly. Get LOCK_ID from the error message. Use as a last resort when automatic unlock fails due to crashes, network issues, or CI/CD failures. Always document force-unlock events in team logs for an audit trail.
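Usage, with a placeholder for the ID printed in the lock error:

```sh
# "Error acquiring the state lock" reports the lock ID; pass it verbatim
terraform force-unlock <LOCK_ID>
```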
Set 'encrypt = true' in the S3 backend block so state objects are encrypted at rest, configure server-side encryption on the bucket itself (sse_algorithm = "AES256", or a KMS key, via aws_s3_bucket_server_side_encryption_configuration), and ensure HTTPS endpoints are used. Always enable encryption, since state files contain sensitive data such as passwords, API keys, and infrastructure details.
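A sketch combining both layers (the bucket name is illustrative):

```hcl
terraform {
  backend "s3" {
    bucket  = "my-tf-state"
    key     = "terraform.tfstate"
    region  = "us-east-1"
    encrypt = true   # state object encrypted at rest
  }
}

# SSE is configured on the bucket itself, not in the backend block
resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
  bucket = "my-tf-state"
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```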
Define 'strategy: matrix:' with variable arrays (os: [ubuntu-latest, windows-latest], node: [18, 20, 22]). Reference via '${{ matrix.os }}' in steps. Use 'include:' to add specific combinations (experimental configs), 'exclude:' to skip invalid combinations. Add 'fail-fast: false' to run all combinations even if one fails. Matrix generates N×M jobs running in parallel, ideal for cross-platform testing, multiple language versions, or multi-cloud deployments.
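A sketch of such a matrix (the extra include combination is illustrative):

```yaml
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false            # run every combination even if one fails
      matrix:
        os: [ubuntu-latest, windows-latest]
        node: [18, 20, 22]
        include:
          - os: ubuntu-latest     # one extra, experimental combination
            node: 23
    steps:
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - run: node --version
```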
Upload with 'actions/upload-artifact@v4' specifying a name and path; download with 'actions/download-artifact@v4'. Use the 'needs:' keyword to declare job dependencies, ensuring the upload completes before the download. Set 'retention-days:' (1-90 days on the free tier) to manage storage costs. Artifacts persist build outputs (binaries, test results, logs) for sharing between pipeline stages or external download via the UI.
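A two-job sketch (the make dist build step is hypothetical):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: make dist                      # hypothetical build step
      - uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: dist/
  test:
    needs: build                            # guarantees the upload finished
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: build-output
          path: dist/
      - run: ls dist/
```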
Use 'cache:' with 'key:' (e.g., '$CI_COMMIT_REF_SLUG') and 'paths:' (node_modules/, .npm/, vendor/). Set 'policy: pull' in consumer jobs and 'policy: push' in producer-only jobs to save time. Use a shared runner cache for multi-runner setups, and split into multiple caches where it helps. Dependency caching across runs can dramatically reduce pipeline duration.
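An illustrative producer/consumer pair for a Node project:

```yaml
build:
  stage: build
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - node_modules/
    policy: pull-push   # producer: downloads and re-uploads the cache
  script:
    - npm ci

test:
  stage: test
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - node_modules/
    policy: pull        # consumer: skips the upload step to save time
  script:
    - npm test
```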
Set 'retention-days:' in upload-artifact@v4 action (1-90 days for free/team, up to 400 for enterprise). Example: upload-artifact: name: logs; path: logs/; retention-days: 7. Configure organization-level default in Settings > Actions > General. Use shorter retention (7 days) for CI logs, longer (90 days) for release artifacts. Automatically balances artifact preservation with storage cost management. Artifacts auto-delete after retention period expires.
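For instance, using the values from the answer:

```yaml
- uses: actions/upload-artifact@v4
  with:
    name: logs
    path: logs/
    retention-days: 7   # short retention for CI logs
```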
Run 'ansible-galaxy collection install namespace.collection_name' for individual collections (e.g., community.docker, amazon.aws). For batch installation, create requirements.yml with collections list, run 'ansible-galaxy collection install -r requirements.yml'. Collections bundle modules, roles, plugins for specific platforms (AWS, Azure, Kubernetes). Install to ~/.ansible/collections or custom path via -p flag. Use 'ansible-galaxy collection list' to verify installed collections.
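An illustrative requirements.yml (the version constraint is hypothetical):

```yaml
collections:
  - name: community.docker
    version: ">=3.0.0"
  - name: amazon.aws
```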
Create role directory with standard structure: tasks/main.yml (core logic), handlers/main.yml (service restarts), templates/ (Jinja2 configs), files/ (static files), vars/main.yml (role variables), defaults/main.yml (default values), meta/main.yml (dependencies, Galaxy metadata). Use 'ansible-galaxy init role_name' to scaffold. Roles encapsulate complete configuration for services (nginx, postgresql, docker) promoting reusability across playbooks and environments. Store in roles/ directory or Galaxy.
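For a role named nginx, the scaffold looks roughly like this (test scaffolding omitted):

```
roles/nginx/
├── defaults/main.yml    # lowest-precedence default values
├── files/               # static files
├── handlers/main.yml    # service restarts
├── meta/main.yml        # dependencies, Galaxy metadata
├── tasks/main.yml       # core logic, entry point
├── templates/           # Jinja2 configs
└── vars/main.yml        # higher-precedence role variables
```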
Create workspaces with 'terraform workspace new dev/prod/staging' and switch with 'terraform workspace select dev'. Each workspace maintains a separate state file, enabling multiple environments from a single configuration. Reference the current workspace in code via 'terraform.workspace', e.g. for conditional resource naming ("${terraform.workspace}-app"). Best practice: keep the blast radius small by grouping only logically related resources. Store environment-specific variables in dev.tfvars and prod.tfvars. List workspaces with 'terraform workspace list'.
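A sketch of workspace-aware configuration (the AMI and instance types are illustrative):

```hcl
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0"   # hypothetical AMI
  instance_type = terraform.workspace == "prod" ? "m5.large" : "t3.micro"

  tags = {
    Name = "${terraform.workspace}-app"
  }
}
```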
Set 'permissions:' at the workflow or job level with specific scopes (contents, issues, pull-requests, packages, etc.). Use least privilege: 'contents: read' for checkout, 'pull-requests: write' for PR comments, 'packages: write' for publishing; set 'permissions: {}' to disable all. GITHUB_TOKEN is auto-generated per job and expires when the workflow completes, eliminating PAT management. Override repository defaults for security-critical workflows.
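A sketch of a least-privilege PR-comment job (the gh invocation is illustrative):

```yaml
permissions:
  contents: read          # enough for actions/checkout
  pull-requests: write    # allows commenting on the PR

jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: gh pr comment "${{ github.event.pull_request.number }}" --body "CI passed"
        env:
          GH_TOKEN: ${{ github.token }}
```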
Use the 'needs:' keyword to explicitly define job dependencies instead of stage-based sequential execution (see the sketch below). Independent jobs run in parallel as soon as their dependencies complete, bypassing stage barriers. The resulting DAG (Directed Acyclic Graph) can cut pipeline duration substantially by eliminating unnecessary waiting, and GitLab visualizes the relationships in the pipeline UI. Combine with cache optimization for maximum speedup.
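An illustrative DAG with two independent build/test chains:

```yaml
build-frontend:
  stage: build
  script:
    - npm run build

build-backend:
  stage: build
  script:
    - go build ./...

test-frontend:
  stage: test
  needs: [build-frontend]   # starts the moment build-frontend finishes
  script:
    - npm test

test-backend:
  stage: test
  needs: [build-backend]
  script:
    - go test ./...
```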
Declare 'data "resource_type" "name" { }' blocks to query existing infrastructure read-only. Use for VPC IDs, AMI IDs, account info, and DNS zones not managed by the current Terraform config. Place data sources near the resources that reference them. For cross-workspace sharing, use the terraform_remote_state data source. Data sources refresh on every plan/apply.
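Expanding the answer's default-VPC example (the CIDR block is illustrative):

```hcl
data "aws_vpc" "existing" {
  default = true
}

resource "aws_subnet" "app" {
  vpc_id     = data.aws_vpc.existing.id
  cidr_block = "10.0.1.0/24"   # hypothetical CIDR
}
```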
Use 'ansible-vault encrypt file.yml' to encrypt entire files or 'ansible-vault encrypt_string' for inline variables. Create a vault password file or use --ask-vault-pass for interactive decryption. Vault secures sensitive data such as database credentials and API keys.
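Typical invocations (the file paths and secret value are hypothetical):

```sh
# Encrypt a whole vars file, then run with an interactive vault prompt
ansible-vault encrypt group_vars/prod/secrets.yml
ansible-playbook site.yml --ask-vault-pass

# Encrypt a single value for inline use in a vars file
ansible-vault encrypt_string 's3cr3t' --name 'db_password'
```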
Run 'terraform plan -out=tfplan' to save binary execution plan. Apply with 'terraform apply tfplan' (no approval needed, plan already reviewed). View plan with 'terraform show tfplan' or 'terraform show -json tfplan'. Best practice: always use plan files in CI/CD pipelines to prevent drift between plan and apply stages. Plan files ensure exact changes reviewed are applied, critical for production safety. Delete plan files after apply for security (may contain sensitive data).
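The full cycle as shell commands:

```sh
terraform plan -out=tfplan   # save the reviewed plan
terraform show tfplan        # human-readable view (add -json for tooling)
terraform apply tfplan       # applies exactly what was reviewed, no approval prompt
rm tfplan                    # plan files may contain sensitive values
```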
Create environments in Settings > Environments with protection rules: required reviewers (up to 6), a wait timer (0-43200 minutes), and deployment-branch restrictions (main only, protected branches, or all). Add environment-specific secrets and variables, then reference the environment from the deploy job. Environments provide deployment history, status tracking, and approval gates, preventing unauthorized production deployments. Use for staging, production, and QA environments.
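A sketch of the workflow side (the deploy script is hypothetical):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: production
      url: https://prod.example.com   # shown in the deployment history
    steps:
      - run: ./deploy.sh              # hypothetical deploy step
```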
Use 'terraform apply -replace="aws_instance.web"' (Terraform 0.15.2+) instead of the deprecated 'terraform taint'. It also works with plan: 'terraform plan -replace="aws_instance.web"'; for multiple resources, repeat the flag. Advantages: clearer intent, better integration with the plan/apply workflow, and no direct modification of the state file. Use when a resource needs recreation due to corruption, configuration drift, or external changes.
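Typical invocations (the resource addresses are illustrative):

```sh
terraform plan  -replace="aws_instance.web"    # preview the forced recreation
terraform apply -replace="aws_instance.web"
# several resources: repeat the flag
terraform apply -replace="aws_instance.web" -replace="aws_instance.worker"
```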
Replace the deprecated only/except with 'rules:' for modern conditional execution. Use 'if:' for CI variables ($CI_COMMIT_BRANCH == "main"), 'changes:' for file paths (e.g., ['src/**/*']), and 'when:' for timing (on_success, manual, always). Multiple rules are evaluated top-down and the first match wins, so order conditions carefully when combining them for complex logic. Rules offer better readability and flexibility than only/except.
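An illustrative rules block; note the ordering, since the first match wins:

```yaml
deploy:
  script:
    - ./deploy.sh                     # hypothetical deploy step
  rules:
    - changes:
        - "**/*.md"
      when: never                     # docs-only commits never deploy
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: always
```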
Use built-in inventory plugins: aws_ec2, azure_rm, gcp_compute. Create plugin config file (aws_ec2.yml) with 'plugin: aws_ec2', regions, filters, keyed_groups for auto-grouping. Run 'ansible-inventory -i aws_ec2.yml --list' to verify. Use 'ansible-playbook -i aws_ec2.yml playbook.yml' for execution. Dynamic inventory auto-discovers instances at runtime, categorizes by tags, regions, security groups. No manual inventory updates needed. For custom sources, write script outputting JSON with _meta.hostvars structure.
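An illustrative aws_ec2.yml (the region, tag filter, and grouping key are hypothetical):

```yaml
plugin: aws_ec2
regions:
  - us-east-1
filters:
  tag:Environment: prod   # only discover production instances
keyed_groups:
  - key: tags.Role        # e.g. hosts tagged Role=web land in group role_web
    prefix: role
```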