December 01, 2025 · 9 min read
Teams either wait days (sometimes months) for infrastructure tickets to be fulfilled, or let developers create Azure resources directly, which leads to inconsistent, insecure, or untracked infrastructure. Clouds fill with resources created outside Terraform, making governance and cost control harder.
I built a self-service portal to solve this: developers provision Azure resources through a simple UI, while Terraform and GitOps handle everything behind the scenes.
Live: https://portal.chrishouse.io
What It Does
- Blueprint-based forms that generate Terraform PRs
- GitHub Actions runs Terraform plan and apply, and manages remote state
- Real-time cost, health, and deployment status
- Shows Azure resources not managed by Terraform
- Resource graph to visualize dependencies and ownership
- Backstage integration for service catalog and scaffolding
The entire platform bootstraps itself—the Container App, Storage, and Front Door were all deployed through the portal using the same blueprints it provides.
The Architecture
Frontend
| Component | Technology |
|---|---|
| Framework | React SPA |
| Hosting | Azure Storage Static Websites |
| CDN | Azure Front Door |
| Auth | GitHub OAuth |
| Forms | Dynamic generation from blueprint metadata |
| Visualization | Azure Resource Graph for dependency mapping |
The frontend renders forms dynamically based on blueprint definitions. Each blueprint defines:
- Required and optional parameters
- Validation rules
- Cost estimation hooks
- Dependencies on other blueprints
// Example blueprint metadata
{
"name": "azure-container-app",
"displayName": "Container App",
"description": "Deploy a containerized application to Azure Container Apps",
"parameters": [
{
"name": "app_name",
"type": "string",
"required": true,
"validation": "^[a-z0-9-]+$"
},
{
"name": "container_image",
"type": "string",
"required": true
},
{
"name": "cpu",
"type": "select",
"options": ["0.25", "0.5", "1.0", "2.0"],
"default": "0.5"
},
{
"name": "memory",
"type": "select",
"options": ["0.5Gi", "1Gi", "2Gi", "4Gi"],
"default": "1Gi"
}
],
"costEstimate": {
"type": "azure-container-apps",
"factors": ["cpu", "memory", "replicas"]
}
}
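As a rough sketch of the dynamic rendering, the frontend can map that metadata straight to form field descriptors. The helper and field shape below are illustrative assumptions, not the portal's actual component code:

```javascript
// Sketch: turn blueprint parameter metadata into generic form field descriptors.
// The field shape and helper name are illustrative, not the portal's real API.
function blueprintToFormFields(blueprint) {
  return blueprint.parameters.map((param) => ({
    name: param.name,
    label: param.name.replace(/_/g, ' '),
    // "select" parameters render as dropdowns; everything else as a text input
    widget: param.type === 'select' ? 'dropdown' : 'text',
    options: param.options || [],
    required: Boolean(param.required),
    defaultValue: param.default ?? '',
    // Compile the blueprint's validation regex into a per-field validator
    validate: (value) =>
      param.validation ? new RegExp(param.validation).test(value) : true,
  }));
}

// e.g. for the blueprint above, the app_name field rejects "My App" but accepts "my-app"
```

Backend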
| Component | Technology |
|---|---|
| Runtime | Node.js + Express |
| Hosting | Azure Container Apps |
| Integration | GitHub App for PR automation |
| State | Terraform remote state in Azure Storage |
| Webhooks | GitHub webhook handling for deployment events |
The backend handles:
- Terraform code generation from blueprint parameters (see the sketch after this list)
- Variable validation before PR creation
- Azure pricing API integration for cost estimates
- GitHub App actions (create PRs, post comments, trigger workflows)
- Resource discovery for unmanaged Azure resources
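For the code-generation step, the generator can be little more than a template filled with the submitted parameters. This is a minimal sketch for the Container App blueprint; the template and function name are illustrative, not the portal's actual generator:

```javascript
// Sketch: render Terraform for the azure-container-app blueprint from validated parameters.
// Simplified on purpose; module wiring, variables.tf, and tfvars generation are omitted.
function generateContainerAppTerraform(params) {
  return `
resource "azurerm_container_app" "this" {
  name                         = "${params.app_name}"
  resource_group_name          = var.resource_group_name
  container_app_environment_id = var.container_app_environment_id
  revision_mode                = "Single"

  template {
    container {
      name   = "${params.app_name}"
      image  = "${params.container_image}"
      cpu    = ${params.cpu}
      memory = "${params.memory}"
    }
  }
}
`.trimStart();
}
```

The simplified PR creation flow below calls a generator like this via generateTerraform.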
// Simplified PR creation flow
async function createDeploymentPR(blueprint, parameters, user) {
// 1. Generate Terraform code from template
const terraformCode = await generateTerraform(blueprint, parameters);
// 2. Create branch
const branchName = `deploy/${blueprint.name}/${Date.now()}`;
await github.createBranch(branchName);
// 3. Commit generated files (variablesFile and tfvarsFile are generated alongside main.tf; omitted here for brevity)
await github.commitFiles(branchName, [
{ path: `deployments/${parameters.app_name}/main.tf`, content: terraformCode },
{ path: `deployments/${parameters.app_name}/variables.tf`, content: variablesFile },
{ path: `deployments/${parameters.app_name}/terraform.tfvars`, content: tfvarsFile }
]);
// 4. Create PR with cost estimate in description
const costEstimate = await estimateCost(blueprint, parameters);
const pr = await github.createPR({
title: `[Portal] Deploy ${blueprint.displayName}: ${parameters.app_name}`,
body: generatePRDescription(parameters, costEstimate),
head: branchName,
base: 'main'
});
// 5. Trigger plan workflow
await github.triggerWorkflow('terraform-plan.yml', { pr_number: pr.number });
return pr;
}
GitOps Pipeline
The real magic happens in GitHub Actions. When a PR is created:
# .github/workflows/terraform-plan.yml
name: Terraform Plan
on:
pull_request:
paths:
- 'deployments/**'
jobs:
plan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Terraform
uses: hashicorp/setup-terraform@v3
- name: Configure Azure credentials
uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Find changed deployments
id: changes
run: |
CHANGED=$(git diff --name-only ${{ github.event.pull_request.base.sha }} | grep '^deployments/' | cut -d'/' -f2 | sort -u)
echo "deployments=$CHANGED" >> $GITHUB_OUTPUT
- name: Terraform Init & Plan
run: |
for deployment in ${{ steps.changes.outputs.deployments }}; do
cd "$GITHUB_WORKSPACE/deployments/$deployment"
terraform init -backend-config="key=$deployment.tfstate"
terraform plan -out=plan.tfplan
terraform show -json plan.tfplan > plan.json
done
- name: Post plan to PR
uses: actions/github-script@v7
with:
script: |
// Note: assumes a single changed deployment in this PR
const plan = require('./deployments/${{ steps.changes.outputs.deployments }}/plan.json');
const summary = formatPlanSummary(plan);
github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: `## Terraform Plan\n\n${summary}\n\n**Resources:**\n- To create: ${plan.resource_changes.filter(r => r.change.actions.includes('create')).length}\n- To update: ${plan.resource_changes.filter(r => r.change.actions.includes('update')).length}\n- To destroy: ${plan.resource_changes.filter(r => r.change.actions.includes('delete')).length}`
});
When the PR is merged:
# .github/workflows/terraform-apply.yml
name: Terraform Apply
on:
push:
branches: [main]
paths:
- 'deployments/**'
jobs:
apply:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Terraform
uses: hashicorp/setup-terraform@v3
- name: Configure Azure credentials
uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Find changed deployments
id: changes
run: |
CHANGED=$(git diff --name-only HEAD~1 | grep '^deployments/' | cut -d'/' -f2 | sort -u)
echo "deployments=$CHANGED" >> $GITHUB_OUTPUT
- name: Terraform Apply
run: |
for deployment in ${{ steps.changes.outputs.deployments }}; do
cd "$GITHUB_WORKSPACE/deployments/$deployment"
terraform init -backend-config="key=$deployment.tfstate"
terraform apply -auto-approve
done
- name: Update portal status
run: |
curl -X POST "${{ secrets.PORTAL_WEBHOOK_URL }}/deployment-complete" \
-H "Content-Type: application/json" \
-d '{"deployment": "${{ steps.changes.outputs.deployments }}", "status": "success"}'
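On the receiving end of that curl call, the backend only needs a small endpoint to record the result. A minimal Express sketch, assuming an in-memory map rather than whatever store the portal actually uses:

```javascript
// Sketch: receive deployment-complete events posted by the apply workflow.
// The in-memory map is an assumption for illustration only.
const express = require('express');

const app = express();
app.use(express.json());

const deploymentStatus = new Map();

app.post('/deployment-complete', (req, res) => {
  const { deployment, status } = req.body;
  deploymentStatus.set(deployment, { status, updatedAt: new Date().toISOString() });
  // The UI reads these entries to show real-time deployment status
  res.status(204).end();
});

app.listen(3000);
```

Backstage Integration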
The portal integrates with Backstage for service catalog and scaffolding. This creates a unified developer experience:
Service Catalog Sync
Every resource deployed through the portal automatically registers in Backstage:
# Generated catalog-info.yaml for each deployment
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
name: ${app_name}
description: ${description}
annotations:
github.com/project-slug: ${github_repo}
backstage.io/techdocs-ref: dir:.
azure.com/resource-group: ${resource_group}
azure.com/subscription: ${subscription_id}
links:
- url: https://${app_name}.azurecontainerapps.io
title: Live Site
icon: dashboard
- url: https://portal.azure.com/#resource${resource_id}
title: Azure Portal
icon: cloud
tags:
- azure
- container-app
- terraform-managed
spec:
type: service
lifecycle: production
owner: ${owner}
dependsOn:
- resource:${resource_group}
Software Templates
Backstage software templates use the same blueprints as the portal. When a developer scaffolds a new service:
- Backstage creates the GitHub repo with CI/CD workflows
- The portal's blueprint form is embedded for infrastructure provisioning
- ArgoCD application is generated for Kubernetes deployments
- Everything links back to the service catalog
# backstage/templates/azure-fullstack/template.yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
name: azure-fullstack-template
title: Azure Full-Stack Application
description: Create a full-stack app with React frontend and Node.js backend on Azure
spec:
owner: platform-team
type: service
parameters:
- title: Application Details
required:
- name
- description
properties:
name:
title: Name
type: string
pattern: '^[a-z0-9-]+$'
description:
title: Description
type: string
- title: Infrastructure
required:
- environment
- region
properties:
environment:
title: Environment
type: string
enum: ['dev', 'staging', 'production']
region:
title: Azure Region
type: string
enum: ['eastus', 'westus2', 'westeurope']
enableDatabase:
title: Include PostgreSQL Database
type: boolean
default: false
enableRedis:
title: Include Redis Cache
type: boolean
default: false
steps:
- id: fetch-base
name: Fetch Base Template
action: fetch:template
input:
url: ./skeleton
values:
name: ${{ parameters.name }}
description: ${{ parameters.description }}
- id: publish-github
name: Publish to GitHub
action: publish:github
input:
repoUrl: github.com?owner=crh225&repo=${{ parameters.name }}
description: ${{ parameters.description }}
- id: create-infrastructure
name: Create Azure Infrastructure
action: http:backstage:request
input:
method: POST
path: /api/portal/deployments
body:
blueprint: azure-fullstack
parameters:
app_name: ${{ parameters.name }}
environment: ${{ parameters.environment }}
region: ${{ parameters.region }}
enable_database: ${{ parameters.enableDatabase }}
enable_redis: ${{ parameters.enableRedis }}
- id: register-catalog
name: Register in Catalog
action: catalog:register
input:
repoContentsUrl: ${{ steps['publish-github'].output.repoContentsUrl }}
catalogInfoPath: /catalog-info.yaml
ArgoCD Integration
For Kubernetes workloads, the portal generates ArgoCD Application manifests:
# Generated ArgoCD Application
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: ${app_name}
namespace: argocd
labels:
app.kubernetes.io/part-of: portal-deployments
annotations:
portal.chrishouse.io/blueprint: ${blueprint_name}
portal.chrishouse.io/deployed-by: ${user}
portal.chrishouse.io/deployed-at: ${timestamp}
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: https://github.com/crh225/${app_name}.git
targetRevision: main
path: helm
helm:
releaseName: ${app_name}
valueFiles:
- values.yaml
parameters:
- name: image.repository
value: ghcr.io/crh225/${app_name}
- name: image.tag
value: ${image_tag}
destination:
server: https://kubernetes.default.svc
namespace: ${namespace}
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
Resource Discovery: Finding the Unmanaged
One of the most valuable features is discovering Azure resources that exist outside Terraform. The portal queries Azure Resource Graph and compares against known Terraform state:
async function discoverUnmanagedResources(subscriptionId) {
// 1. Get all resources from Azure Resource Graph
const azureResources = await resourceGraph.query(`
Resources
| where subscriptionId == '${subscriptionId}'
| project id, name, type, resourceGroup, location, tags
`);
// 2. Get all resources from Terraform state files
const terraformResources = new Set();
const stateFiles = await listStateFiles();
for (const stateFile of stateFiles) {
const state = await downloadState(stateFile);
state.resources.forEach(r => {
terraformResources.add(r.instances[0]?.attributes?.id);
});
}
// 3. Find resources not in Terraform
const unmanaged = azureResources.filter(r => !terraformResources.has(r.id));
// 4. Categorize by risk level
return Promise.all(unmanaged.map(async (r) => ({
...r,
riskLevel: calculateRiskLevel(r),
estimatedCost: await estimateResourceCost(r),
recommendation: generateRecommendation(r)
})));
}
function calculateRiskLevel(resource) {
// High risk: databases, key vaults, networking
const highRiskTypes = [
'Microsoft.Sql/servers',
'Microsoft.KeyVault/vaults',
'Microsoft.Network/virtualNetworks',
'Microsoft.Network/publicIPAddresses'
];
// Medium risk: compute, storage
const mediumRiskTypes = [
'Microsoft.Compute/virtualMachines',
'Microsoft.Storage/storageAccounts',
'Microsoft.ContainerRegistry/registries'
];
if (highRiskTypes.includes(resource.type)) return 'high';
if (mediumRiskTypes.includes(resource.type)) return 'medium';
return 'low';
}
The UI displays unmanaged resources with:
- Risk level badges (high/medium/low)
- Estimated monthly cost
- One-click "Import to Terraform" action (sketched below)
- Ownership lookup from tags or activity logs
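The one-click import boils down to generating a Terraform import for the selected resource. A minimal sketch using Terraform 1.5+ import blocks; the type-to-resource mapping is illustrative and covers only a few types:

```javascript
// Sketch: map an unmanaged Azure resource to a Terraform import block.
// The mapping table is illustrative and far from complete.
const AZURE_TYPE_TO_TERRAFORM = {
  'Microsoft.Storage/storageAccounts': 'azurerm_storage_account',
  'Microsoft.ContainerRegistry/registries': 'azurerm_container_registry',
  'Microsoft.KeyVault/vaults': 'azurerm_key_vault',
};

function generateImportBlock(resource) {
  const terraformType = AZURE_TYPE_TO_TERRAFORM[resource.type];
  if (!terraformType) {
    throw new Error(`No Terraform mapping for ${resource.type}`);
  }
  // Terraform labels allow letters, digits, underscores, and dashes
  const label = 'imported_' + resource.name.toLowerCase().replace(/[^a-z0-9-]/g, '_');
  return `
import {
  to = ${terraformType}.${label}
  id = "${resource.id}"
}
`.trimStart();
}
```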
Cost Estimation
Before any deployment, developers see estimated costs:
async function estimateCost(blueprint, parameters) {
const pricingClient = new AzureRetailPricesClient();
// Map blueprint resources to Azure pricing SKUs
const resourceCosts = await Promise.all(
blueprint.resources.map(async (resource) => {
const sku = resolveSkuFromParameters(resource, parameters);
const prices = await pricingClient.query({
armRegionName: parameters.region,
serviceFamily: resource.serviceFamily,
skuName: sku
});
return {
resource: resource.name,
sku,
hourlyRate: prices[0]?.retailPrice || 0,
monthlyEstimate: (prices[0]?.retailPrice || 0) * 730
};
})
);
const totalMonthly = resourceCosts.reduce((sum, r) => sum + r.monthlyEstimate, 0);
return {
resources: resourceCosts,
totalMonthly,
totalYearly: totalMonthly * 12,
currency: 'USD'
};
}
The PR description includes a cost breakdown:
## 💰 Cost Estimate
| Resource | SKU | Monthly |
|----------|-----|---------|
| Container App | 0.5 vCPU, 1Gi | $36.50 |
| PostgreSQL Flexible | B1ms | $12.41 |
| Storage Account | Standard_LRS | $2.30 |
**Total: ~$51.21/month**
_Estimates based on Azure retail pricing. Actual costs may vary._
The Resource Graph
Visualizing dependencies helps developers understand what they're deploying:
// Build dependency graph from Terraform state and Azure Resource Graph
async function buildResourceGraph(deploymentName) {
const state = await getTerraformState(deploymentName);
const nodes = [];
const edges = [];
for (const resource of state.resources) {
nodes.push({
id: resource.instances[0].attributes.id,
label: resource.name,
type: resource.type,
status: await getResourceHealth(resource.instances[0].attributes.id)
});
// Find dependencies from Terraform
if (resource.instances[0].dependencies) {
for (const dep of resource.instances[0].dependencies) {
edges.push({
source: resource.instances[0].attributes.id,
target: dep
});
}
}
}
// Enrich with Azure Resource Graph relationships
const azureGraph = await resourceGraph.query(`
ResourceContainers
| where id == '${deploymentResourceGroup}'
| project-away tenantId
| join kind=leftouter (
Resources | project id, name, type, resourceGroup
) on resourceGroup
`);
return { nodes, edges, azureGraph };
}
The frontend renders this as an interactive graph where developers can:
- Click nodes to see resource details
- See health status (healthy/degraded/unhealthy)
- Trace dependencies upstream and downstream (see the sketch after this list)
- Filter by resource type or status
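Tracing and filtering are plain graph walks over the nodes and edges returned by buildResourceGraph. A small sketch, assuming the { nodes, edges } shape shown above:

```javascript
// Sketch: follow dependency edges outward from a starting resource.
// Assumes the { nodes, edges } shape produced by buildResourceGraph above.
function traceDependencies(graph, startId) {
  const visited = new Set([startId]);
  const queue = [startId];
  while (queue.length > 0) {
    const current = queue.shift();
    for (const edge of graph.edges) {
      if (edge.source === current && !visited.has(edge.target)) {
        visited.add(edge.target);
        queue.push(edge.target);
      }
    }
  }
  return graph.nodes.filter((node) => visited.has(node.id));
}

// Filtering by status or type is a one-line predicate on the same structure
const unhealthyNodes = (graph) => graph.nodes.filter((node) => node.status !== 'healthy');
```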
Self-Hosting: The Portal Deploys Itself
The ultimate test: can the portal deploy itself?
Yes. The Container App, Storage Account, and Front Door that host the portal were all created using portal blueprints. The bootstrap process:
- Initial manual setup: Create resource group, storage for Terraform state, GitHub App
- Deploy backend: Use the `azure-container-app` blueprint
- Deploy frontend: Use the `azure-static-website` blueprint
- Deploy CDN: Use the `azure-front-door` blueprint
- Configure DNS: Point `portal.chrishouse.io` to Front Door
From that point, all updates go through the portal itself.
What I Learned
- **Blueprint design matters more than UI polish.** Well-designed blueprints with sensible defaults reduce form complexity dramatically.
- **GitOps is the right abstraction for infrastructure.** PRs provide review, rollback, and audit trails automatically.
- **Cost visibility changes behavior.** When developers see "$500/month" before clicking deploy, they ask questions.
- **Unmanaged resource discovery is surprisingly valuable.** Every organization has shadow IT. Making it visible is the first step.
- **Backstage and custom portals can coexist.** Backstage handles catalog and scaffolding well; custom UIs handle specialized workflows better.
What's Next
- Policy-as-code: Integrate OPA/Gatekeeper to enforce standards before PRs are created
- Drift detection: Alert when deployed resources diverge from Terraform state
- Cost anomaly alerts: Notify when spending exceeds estimates
- Multi-cloud: Extend blueprints to AWS and GCP
The portal is live at https://portal.chrishouse.io—built and hosted entirely in my personal Azure lab.
Enjoyed this post? Give it a clap!