# Troubleshooting

## Azure Function App

### Function not triggering on schedule

**Symptoms:** Function doesn’t execute automatically.

**Solutions:**

- Verify the Function App is started (not stopped): Azure Portal → Function App → Overview → Start
- Check that `COPY_PARQUET_SCHEDULE` is set correctly in Application Settings
- Verify the CRON expression includes the seconds field (6 parts, not 5): `0 */15 * * * *`
- Check Application Insights for startup errors
- Restart the Function App:

  ```bash
  az functionapp restart --name <function-app-name> --resource-group <rg-name>
  ```
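The seconds field is easy to drop when copying a standard 5-field cron expression from elsewhere. A minimal sanity check, sketched in Python (the helper name is illustrative, not part of the deployed function):

```python
# Sketch: Azure Functions timer triggers use the 6-field NCRONTAB format
# (seconds first), while classic cron uses 5 fields. This check only counts
# fields; it does not validate the values within them.
def is_six_field_cron(expr: str) -> bool:
    """Return True if the expression has exactly 6 whitespace-separated fields."""
    return len(expr.split()) == 6

assert is_six_field_cron("0 */15 * * * *")   # every 15 minutes, NCRONTAB style
assert not is_six_field_cron("*/15 * * * *") # classic 5-field cron: rejected
```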
### Function App fails to start

**Symptoms:** Deployed but doesn’t execute or functions don’t appear.

**Solutions:**

- Check logs: Function App → Log Stream
- Verify that the dependencies in `requirements.txt` can be installed
- Verify `host.json` and `function_app.py` syntax
- The `SCM_DO_BUILD_DURING_DEPLOYMENT` setting should be `true` (set by Terraform)
- If ZIP deploy hangs, deploy manually:

  ```bash
  az functionapp deployment source config-zip \
    --resource-group <rg-name> \
    --name <function-app-name> \
    --src functionapp.zip
  ```
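A common cause of functions not appearing after a ZIP deploy is an archive whose files sit inside a subfolder (from zipping the project directory itself) rather than at the archive root. A stdlib-only pre-deploy check, sketched in Python (the helper is illustrative):

```python
# Sketch: confirm the ZIP has host.json and function_app.py at its root,
# where the Functions runtime expects them.
import io
import zipfile

def has_required_files(zip_bytes: bytes) -> bool:
    """Return True if both required files are at the archive root."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        names = set(zf.namelist())
    return {"host.json", "function_app.py"} <= names
```

If this returns `False` for your `functionapp.zip`, rebuild the archive from inside the project directory so the files land at the root.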
### S3 access denied (403)

**Symptoms:** Function logs show AWS authorization errors.

**Solutions:**

- Verify `S3_ACCESS_KEY_ID` and `S3_SECRET_ACCESS_KEY` are correct
- Ensure the IAM policy allows `s3:ListBucket` and `s3:GetObject`
- Check that the S3 bucket name and region match the configuration
- Contact Haltian to verify the credentials are active
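For reference, an IAM policy granting just the two required actions might look like the following (the bucket name is a placeholder; note that `s3:ListBucket` applies to the bucket ARN while `s3:GetObject` applies to the objects under it):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<bucket-name>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    }
  ]
}
```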
### Out of memory or timeout

**Symptoms:** Function fails with timeout or memory errors.

**Solutions:**

- Reduce `measurements_time_range_days` to fetch fewer files per run
- Increase the timeout in `host.json` (max 10 minutes on the Consumption plan):

  ```json
  {
    "functionTimeout": "00:10:00"
  }
  ```

- Consider upgrading to a Premium plan (EP1+) for more resources
## OneLake Issues

### AADSTS500011: Resource not found

**Symptoms:** Token request fails with “The resource principal named {resource} was not found in the tenant.”

**Cause:** Incorrect resource/audience values in token requests. This is the most common OneLake authentication error.

**Solutions:**

- Verify the custom OneLake app was created: `terraform output onelake_app_client_id`
- Ensure you’re using the correct client ID and secret from the `infra/onelake` module
- Request the token for the scope `https://onelake.dfs.fabric.microsoft.com/.default`
- Wait 5–10 minutes for the app registration to propagate
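The failing detail in an AADSTS500011 case is almost always the scope. A sketch of the client-credentials request that should succeed, in Python (the endpoint and field names are the standard Microsoft identity platform v2.0 ones; the helper itself is illustrative):

```python
# Sketch: build the token request for OneLake. Requesting any other
# resource/audience here is what triggers AADSTS500011.
ONELAKE_SCOPE = "https://onelake.dfs.fabric.microsoft.com/.default"

def token_request(tenant_id: str, client_id: str, client_secret: str) -> tuple[str, dict]:
    """Return the (url, form body) for a v2.0 client-credentials token request."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": ONELAKE_SCOPE,
    }
    return url, body
```

POST the body as form data to the URL; the JSON response contains the `access_token` to send as a bearer token to OneLake.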
### OneLake upload fails (403 Forbidden)

**Symptoms:** Function authenticates but can’t write to the Lakehouse.

**Solutions:**

- Get the Function App’s managed identity principal ID: `terraform output function_app_identity_principal_id`
- In the Fabric Portal, go to your workspace → Settings → Manage access
- Add the principal ID with the Contributor role
- Wait 5–15 minutes for permissions to propagate
- Verify the Fabric capacity is running (not paused)
### Fabric workspace not found

**Symptoms:** Cannot access the workspace after deployment.

**Solutions:**

- Verify the workspace exists: `terraform output fabric_workspace_id`
- Check the Fabric Portal directly
- Ensure the Fabric capacity is running (not paused)
- Wait 5–10 minutes for propagation after creation
### Insufficient privileges for directory roles

**Symptoms:** `terraform apply` fails on role assignment.

**Solutions:**

- Set `assign_directory_roles = false` and `assign_graph_permissions = false` in `terraform.tfvars`
- Deploy with Terraform
- Have a Global Admin grant consent manually: Azure AD → App registrations → Your app → API permissions → Grant admin consent
## Storage Account Issues

### Storage account name not available

**Symptoms:** “The storage account name is already taken.”

**Solutions:**

- Choose a different, globally unique name
- Set `upload_storage_account_name` explicitly in `terraform.tfvars`
- Or use an existing storage account: `storage_use_existing = true`
### Access denied when uploading (403)

**Symptoms:** Function can’t write to the storage account.

**Solutions:**

- Using a connection string: verify `STORAGE_CONNECTION_STRING` is correct in the app settings
- Using managed identity: assign the “Storage Blob Data Contributor” role:

  ```bash
  az role assignment create \
    --role "Storage Blob Data Contributor" \
    --assignee $(cd azure-function/terraform && terraform output -raw function_app_identity_principal_id) \
    --scope /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>
  ```

- Wait 5–10 minutes for RBAC propagation
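Because RBAC grants can take several minutes to propagate, an upload routine can retry on authorization failures with backoff instead of failing the whole run. A generic sketch (the exception type and timings are illustrative; real Azure SDK errors would need mapping onto this):

```python
# Sketch: retry a callable on authorization errors with exponential backoff,
# to ride out RBAC propagation delays.
import time

def retry_on_auth_error(fn, attempts: int = 5, base_delay: float = 2.0):
    """Call fn(), retrying on PermissionError with delays of base_delay * 2**i."""
    for i in range(attempts):
        try:
            return fn()
        except PermissionError:
            if i == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** i))
```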
### Container not found

**Symptoms:** Application reports the container doesn’t exist.

**Solutions:**

- Verify the name: `terraform output storage_container_name`
- Check in the Azure Portal: Storage Account → Containers
- Ensure the container name in the Function App settings matches
## Power BI Issues

### Can’t connect to the Storage Account from Power BI Desktop

**Symptoms:** Power BI shows authentication or access errors.

**Solutions:**

- Ensure you’re signed in to Azure in Power BI Desktop (top-right corner)
- Verify your user has the Storage Blob Data Reader role on the storage account
- In Terraform, use the `blob_readers_email` variable to grant access: `blob_readers_email = ["your-email@company.com"]`
- If using firewall rules, ensure your IP is allowed
### Can’t connect to the Lakehouse from Power BI Desktop

**Symptoms:** The Lakehouse doesn’t appear in Power BI data sources.

**Solutions:**

- Verify you have access to the Fabric workspace
- Ensure the Fabric capacity is running (not paused)
- Try Get Data → Microsoft Fabric → Lakehouses instead of the OneLake data hub
- Check that you’re signed in with the correct Azure account
### No data appears after loading

**Symptoms:** Tables are empty in Power BI.

**Solutions:**

- Verify the Azure Function has executed successfully (check Application Insights)
- For the Storage Account: check that files exist in the container using the Azure Portal or CLI
- For OneLake: check that files exist in the Lakehouse Files section
- In Power Query, verify the `Text.Contains` filter matches your measurement type names
- Click Refresh in Power BI Desktop to reload the data
### Power Query errors on Parquet files

**Symptoms:** “Expression.Error” or data type conversion failures.

**Solutions:**

- Check that the column names match your actual Parquet schema
- Verify the storage account name in the Power Query M code is correct
- Ensure the Parquet files are not corrupted: download one manually and verify it
- If the column types differ, adjust the `Table.TransformColumnTypes` step
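When verifying a downloaded file, a quick stdlib-only integrity check helps: valid Parquet files begin and end with the 4-byte magic `PAR1`. This catches truncated or HTML-error-page downloads, though not every form of corruption:

```python
# Sketch: check the Parquet magic bytes at both ends of the file.
def looks_like_parquet(path: str) -> bool:
    """Return True if the file starts and ends with the Parquet magic PAR1."""
    with open(path, "rb") as f:
        head = f.read(4)
        f.seek(-4, 2)  # 4 bytes before end of file
        tail = f.read(4)
    return head == b"PAR1" and tail == b"PAR1"
```

For a fuller check, open the file with a Parquet reader (e.g. `pyarrow.parquet.read_table`) and confirm the schema matches what Power Query expects.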
## Terraform Issues

### Provider initialization fails

**Symptoms:** `terraform init` fails to download providers.

**Solutions:**

- Verify internet connectivity
- Check the Terraform version: `terraform -version` (must be ≥ 1.5.0)
- Clear the provider cache: `rm -rf .terraform`, then run `terraform init`
### State lock errors

**Symptoms:** Terraform can’t acquire the state lock.

**Solutions:**

- If a previous run was interrupted: `terraform force-unlock <lock-id>`
- Ensure no other Terraform process is running against the same state
- Consider using a remote backend for team environments
### Destroy fails with dependency errors

**Symptoms:** `terraform destroy` fails with resource dependency issues.

**Solutions:**

- Always destroy the Function App first, then the infrastructure:

  ```bash
  cd azure-function/terraform && terraform destroy
  cd ../../infra/onelake && terraform destroy   # or infra/storageaccount
  ```

- If individual resources fail, use a targeted destroy: `terraform destroy -target=azurerm_linux_function_app.func`
## Verification Commands

Use these commands to quickly check the health of your deployment:

```bash
# Check Function App status
FUNC=$(cd azure-function/terraform && terraform output -raw function_app_name)
RG=$(cd azure-function/terraform && terraform output -raw resource_group_name)
az functionapp show --name $FUNC --resource-group $RG --query state -o tsv

# View recent function executions
az monitor app-insights query \
  --app $(cd azure-function/terraform && terraform output -raw application_insights_app_id) \
  --analytics-query "traces | where timestamp > ago(1h) | order by timestamp desc | take 10" \
  --output table

# List files in the Storage Account
az storage blob list \
  --account-name $(cd infra/storageaccount && terraform output -raw storage_account_name) \
  --container-name incoming \
  --auth-mode login \
  --output table

# Check OneLake files (via a Fabric Portal link)
cd infra/onelake
WORKSPACE=$(terraform output -raw fabric_workspace_id)
LAKEHOUSE=$(terraform output -raw fabric_lakehouse_id)
echo "https://app.fabric.microsoft.com/groups/$WORKSPACE/lakehouses/$LAKEHOUSE"
```
## Enable Debug Logging

For detailed troubleshooting, enable DEBUG-level logging.

**Via Terraform:**

```hcl
# In azure-function/terraform/terraform.tfvars
log_level = "DEBUG"
```

Then run `terraform apply`.

**Via the Azure Portal:**

- Go to Function App → Configuration
- Add or update the setting `LOG_LEVEL=DEBUG`
- Save and restart
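For context, here is a sketch of how a `LOG_LEVEL` app setting typically maps onto Python’s `logging` module inside the function (the exact wiring in `function_app.py` may differ):

```python
# Sketch: read LOG_LEVEL from the environment and apply it to the root
# logger, falling back to INFO for missing or unknown values.
import logging
import os

def configure_logging() -> int:
    """Set the root logger level from LOG_LEVEL and return the numeric level."""
    level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
    level = getattr(logging, level_name, logging.INFO)
    logging.getLogger().setLevel(level)
    return level
```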
## Getting Help

If you’re unable to resolve an issue:

- Check the Application Insights logs for detailed error messages
- Enable DEBUG logging and reproduce the issue
- Contact Haltian support with the error details and your deployment configuration