Azure Architect Interview Questions and Answers

What is Reapply and Redeploy in Azure?
In Azure, Reapply and Redeploy are two different operations used to troubleshoot and fix issues with virtual machines (VMs). Here's what each does:
1. Reapply
- Purpose: Resets the VM's state and reapplies the VM properties (like networking settings) without redeploying it.
- When to use: If the VM is running but experiencing issues with configuration settings, extensions, or networking.

2. Redeploy
- Purpose: Moves the VM to a new host in Azure while keeping its existing settings.
- When to use: If the VM is stuck in an unresponsive state or cannot be connected to.
- Command (Azure CLI):
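A minimal sketch with the Azure CLI; the resource group and VM names are placeholders:

```sh
# Reapply the VM's state and properties in place (no move to a new host)
az vm reapply --resource-group MyResourceGroup --name MyVM

# Redeploy the VM to a new Azure host (the VM restarts)
az vm redeploy --resource-group MyResourceGroup --name MyVM
```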
Key Differences

| Feature | Reapply | Redeploy |
| --- | --- | --- |
| What it does | Refreshes VM settings | Moves VM to a new host |
| Impact | No downtime (usually) | VM is restarted (downtime) |
| Fixes issues like | Incorrect network settings, extensions | Unresponsive VM, OS boot failures |
Why Do We Need to Re-protect After a Failover in Azure Site Recovery (ASR)?
After performing a failover in Azure Site Recovery (ASR), you must reprotect the VM to establish protection in the opposite direction. Here's why:
- Failover
Switches the Replication Direction
- During
failover, the VM is moved from the primary site to the secondary
site (Azure or another datacentre).
- The
original primary site is now considered the failed site, and it is no
longer tracking changes.
- Replication
Must Be Re-established
- After
failover, the workload is running in the new location.
- However,
if another failure occurs, you have no active replication back to the
original site.
- Reprotection
re-establishes replication from the new primary site (previously
secondary) back to the original site.
- Failback
Preparation
- Without
Reprotect, you cannot failback to the original site.
- Reprotect
ensures that the data is synchronized back so that, once the original
site is restored, you can failback safely.
How Does Azure Site Recovery (ASR) Work?
Azure Site Recovery (ASR) is a disaster recovery (DR) solution that replicates workloads between different locations (on-premises to Azure, or Azure region to region).
ASR Workflow:
1. Replication Setup
   - You configure the source (on-premises/another Azure region) and target (Azure/secondary region).
   - ASR replicates the VM data continuously.
2. Replication & Monitoring
   - ASR tracks incremental changes and stores them in Azure Storage.
   - You can monitor RPO (Recovery Point Objective) and RTO (Recovery Time Objective).
3. Failover (Planned or Unplanned)
   - In case of an outage, ASR allows you to fail over VMs to the secondary site.
   - You can choose:
     - Planned Failover (no data loss, manual, for testing)
     - Unplanned Failover (during an outage)
4. Reprotect
   - After failover, ASR stops the old replication and sets up new replication in the reverse direction (from the secondary site to the original site).
5. Failback (If Needed)
   - If the original site is restored, you can fail back VMs to the original location.
Failover vs. Reprotect vs. Failback

| Step | Action | Purpose |
| --- | --- | --- |
| Failover | Move VM to secondary site | Resume operations in case of failure |
| Reprotect | Set up replication in reverse | Prepare for failback to original site |
| Failback | Move VM back to the original site | Restore normal operations |
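As a rough conceptual sketch (not an ASR API), the failover / reprotect / failback cycle in the table above can be modeled as a tiny state machine; the class and method names here are invented for illustration:

```python
# Conceptual model of ASR replication direction across failover,
# reprotect, and failback. Illustration only -- not the real ASR API.

class ReplicatedVM:
    def __init__(self, primary: str, secondary: str):
        self.active_site = primary       # where the workload currently runs
        self.replica_site = secondary    # where changes are replicated to
        self.replicating = True          # is replication currently active?

    def failover(self):
        # Workload moves to the secondary site; the old replication stops,
        # so the VM is unprotected until reprotect runs.
        self.active_site, self.replica_site = self.replica_site, self.active_site
        self.replicating = False

    def reprotect(self):
        # Re-establish replication in the reverse direction
        # (new primary -> original site).
        self.replicating = True

    def failback(self):
        # Failback requires reprotect first, so data is in sync before moving.
        if not self.replicating:
            raise RuntimeError("Reprotect before failback")
        self.active_site, self.replica_site = self.replica_site, self.active_site
        self.replicating = False         # reprotect again after failback

vm = ReplicatedVM("on-prem", "azure")
vm.failover()    # workload now runs in Azure, unprotected
vm.reprotect()   # replication now flows Azure -> on-prem
vm.failback()    # workload returns on-prem once the site is restored
```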
Where Should the Private Endpoint Be Created in a Hub-and-Spoke Topology with Azure Private DNS?
In a Hub-and-Spoke network topology with Azure Private DNS, the Private Endpoint should be created in the Spoke VNet where the consuming resources (VMs, Apps, etc.) reside. Here's why:
Best Practices for Private Endpoint Placement
- Private Endpoints Should Be in the Spoke VNet
  - Private Endpoints allow access to PaaS services (like Azure Storage, SQL, etc.) over private IPs.
  - Since the applications and VMs that need to consume the service reside in the Spoke VNet, the Private Endpoint should be created there.
  - This keeps the traffic contained within the Spoke and avoids unnecessary routing through the Hub.
- Azure Private DNS Zone Should Be in the Hub (or Centrally Managed)
  - The Private DNS Zone should be created in the Hub VNet (or a shared services VNet).
  - The Hub should be linked to all Spokes as a DNS resolution point.
  - This ensures that VMs in Spokes can resolve Private Endpoints correctly.
- Private Endpoint DNS Resolution in Hub-and-Spoke
  - If a Private Endpoint is created in a Spoke VNet, it registers its private IP in the Private DNS Zone.
  - Spoke VNets need DNS forwarding to the Hub to resolve Private Endpoint names.
  - A custom DNS server (or Azure DNS Private Resolver) in the Hub can help centralize DNS resolution.
Example Deployment
Scenario: Accessing Azure SQL via Private Endpoint
- Private DNS Zone: privatelink.database.windows.net (created in the Hub)
- Private Endpoint for Azure SQL: created in the Spoke VNet
- DNS Resolution:
  - The Spoke VNet is linked to the Private DNS Zone in the Hub.
  - VMs in the Spoke use the Private Endpoint IP instead of the public IP.
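The scenario above can be sketched with the Azure CLI. All resource group, VNet, and endpoint names here are placeholders, and the SQL server resource ID and DNS zone ID must come from your environment (IDs are needed because the zone and endpoint live in different resource groups):

```sh
# 1. Private DNS zone in the Hub (or shared services) resource group
az network private-dns zone create \
  --resource-group hub-rg \
  --name privatelink.database.windows.net

# 2. Link the zone to the Spoke VNet so its VMs can resolve the zone
az network private-dns link vnet create \
  --resource-group hub-rg \
  --zone-name privatelink.database.windows.net \
  --name spoke-link \
  --virtual-network <spoke-vnet-resource-id> \
  --registration-enabled false

# 3. Private Endpoint for the SQL server, placed in the Spoke VNet
az network private-endpoint create \
  --resource-group spoke-rg \
  --name sql-pe \
  --vnet-name spoke-vnet \
  --subnet pe-subnet \
  --private-connection-resource-id <sql-server-resource-id> \
  --group-id sqlServer \
  --connection-name sql-pe-conn

# 4. Register the endpoint's private IP in the zone via a DNS zone group
az network private-endpoint dns-zone-group create \
  --resource-group spoke-rg \
  --endpoint-name sql-pe \
  --name default \
  --private-dns-zone <private-dns-zone-resource-id> \
  --zone-name sql
```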
Key Takeaways

| Component | Placement | Reason |
| --- | --- | --- |
| Private Endpoint | Spoke VNet | Keeps PaaS service traffic inside the Spoke |
| Private DNS Zone | Hub (or Shared VNet) | Centralized DNS resolution |
| DNS Forwarding | Hub (DNS Resolver/Custom DNS) | Ensures Spokes resolve Private Endpoints correctly |
Best Practices & Steps to Migrate Storage from One Subscription to Another in Azure
When migrating Azure Storage Accounts between subscriptions, you usually must transfer the data manually, as Azure does not support direct subscription moves for all storage account configurations. Below are best practices and methods to ensure a smooth migration.
🔹 Best Practices Before Migration
✅ Plan the Migration Approach: Choose the best method based on your storage type (Blob, Files, Tables, Queues).
✅ Check Subscription & RBAC Permissions: Ensure both source and target subscriptions have the required permissions.
✅ Minimize Downtime: Schedule during off-peak hours and use incremental copy to reduce impact.
✅ Verify Networking & Security: Ensure VNet integration, private endpoints, and firewall rules are correctly configured in the new subscription.
✅ Update Dependencies: Update connection strings for apps using the old storage account.
🔹 Methods to Migrate Azure Storage
🔹 1. Use Azure Storage Account Move (Limited Use)
- If both subscriptions are in the same Azure AD tenant, you may be able to move the storage account directly.
- Supported: general-purpose v1 and v2 accounts without private endpoints.
- Not Supported: classic storage, Premium storage, Private Endpoints, storage with CMK encryption.
Azure Portal:
1. Go to Storage Account → Change Subscription.
2. Select the destination subscription and resource group.
3. Click Move.
✅ Best For: simple moves with no dependencies.
⚠️ Limitations: not supported for all storage accounts (use the manual methods below if unsupported).
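If the account qualifies, the portal move has a CLI equivalent; a sketch with placeholder account, resource group, and subscription names:

```sh
# Look up the storage account's resource ID (placeholder names)
STORAGE_ID=$(az storage account show \
  --name mystorageaccount \
  --resource-group source-rg \
  --query id --output tsv)

# Move it to a resource group in the destination subscription
az resource move \
  --ids "$STORAGE_ID" \
  --destination-group target-rg \
  --destination-subscription-id 00000000-0000-0000-0000-000000000000
```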
🔹 2. Migrate Storage Data Manually (Recommended)
If you cannot move the storage account, migrate the data using AzCopy, Storage Explorer, or Data Factory.
🔹 A. AzCopy (Best for Large Blob/File Transfers)
AzCopy is a fast, CLI-based tool to copy large amounts of storage data.
Steps:
1. Download and install AzCopy, then sign in:

```sh
azcopy login
```

2. Copy data to the new storage account:

```sh
azcopy copy "https://sourceaccount.blob.core.windows.net/container/*" "https://destinationaccount.blob.core.windows.net/container" --recursive=true
```

3. Verify data integrity:
   - Compare blob counts between source and destination.
   - Run checksum validation if needed.
✅ Best For: large-scale data migration with high-speed performance.
⚠️ Limitations: needs scripting for automation.
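The integrity check in step 3 can be scripted. A minimal local sketch, assuming you have exported a name-to-MD5 listing from each account (for example from a blob inventory or AzCopy logs); the listings here are plain dicts built for illustration:

```python
# Compare two listings of blob_name -> MD5 to spot missing or changed blobs.
import hashlib

def md5_of(data: bytes) -> str:
    """MD5 hex digest of raw bytes, matching the Content-MD5 style check."""
    return hashlib.md5(data).hexdigest()

def compare_listings(source: dict, destination: dict):
    """Return (blobs missing from destination, blobs whose MD5 differs)."""
    missing = sorted(set(source) - set(destination))
    changed = sorted(
        name for name in source
        if name in destination and source[name] != destination[name]
    )
    return missing, changed

# Toy listings: b.txt was corrupted/changed in transit
src = {"a.txt": md5_of(b"hello"), "b.txt": md5_of(b"world")}
dst = {"a.txt": md5_of(b"hello"), "b.txt": md5_of(b"WORLD")}
missing, changed = compare_listings(src, dst)
```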
🔹 B. Azure Storage Explorer (GUI-Based Method)
For smaller migrations, Azure Storage Explorer provides a drag-and-drop interface.
Steps:
1. Install Azure Storage Explorer.
2. Connect both the source and destination storage accounts.
3. Select Blobs, Files, Queues, or Tables → Right-click → Copy & Paste.
✅ Best For: small to medium migrations; easy for non-technical users.
⚠️ Limitations: slower than AzCopy for large data transfers.
🔹 C. Azure Data Factory (Automated & Scalable)
For structured data (tables, blobs, etc.), Azure Data Factory (ADF) provides an ETL pipeline to migrate data.
Steps:
1. Create an Azure Data Factory.
2. Set up a Copy Data pipeline:
   - Source: the old storage account.
   - Destination: the new storage account.
3. Run the pipeline and monitor the transfer logs.
✅ Best For: scheduled and automated data transfers.
⚠️ Limitations: requires Azure Data Factory setup.
🔹 3. Migrate Azure Files (SMB/NFS Shares)
If you are using Azure Files, migrate with AzCopy, Azure File Sync, or Robocopy.
Using AzCopy:

```sh
azcopy copy "https://sourceaccount.file.core.windows.net/share" "https://destinationaccount.file.core.windows.net/share" --recursive=true
```

Using Robocopy (from a Windows VM):

```cmd
robocopy \\sourceaccount.file.core.windows.net\share \\destinationaccount.file.core.windows.net\share /mir
```

✅ Best For: file shares requiring SMB/NFS access.
⚠️ Limitations: permissions (ACLs) may need to be migrated manually.
🔹 Post-Migration Steps
1️⃣ Validate Data: compare the source and destination to ensure all data transferred successfully.
2️⃣ Update Application Connection Strings: update Azure Function Apps, Web Apps, and VMs to use the new storage account.
3️⃣ Check Security Settings: ensure RBAC, private endpoints, and firewall rules are properly configured.
4️⃣ Delete the Old Storage Account (only after validation) to avoid extra costs.
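Step 2️⃣ often amounts to rewriting the AccountName and AccountKey fields in each app's storage connection string. A small sketch of a hypothetical helper (the field names follow the standard `DefaultEndpointsProtocol=...;AccountName=...;AccountKey=...` format):

```python
# Rewrite the AccountName and AccountKey fields of an Azure Storage
# connection string, leaving all other fields untouched.
def update_connection_string(conn: str, new_account: str, new_key: str) -> str:
    parts = []
    for field in conn.split(";"):
        key, _, _ = field.partition("=")
        if key == "AccountName":
            parts.append(f"AccountName={new_account}")
        elif key == "AccountKey":
            parts.append(f"AccountKey={new_key}")
        else:
            parts.append(field)
    return ";".join(parts)

old = ("DefaultEndpointsProtocol=https;AccountName=oldacct;"
       "AccountKey=oldkey;EndpointSuffix=core.windows.net")
new = update_connection_string(old, "newacct", "newkey")
```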
💡 Summary: Which Method Should You Choose?

| Method | Best For | Pros | Cons |
| --- | --- | --- | --- |
| Move via Portal | Simple storage moves | Easy, no data copy required | Not supported for all storage |
| AzCopy | Large blob/file migrations | Fast, automated, secure | CLI-based, requires scripting |
| Storage Explorer | Small migrations | GUI-based, easy to use | Slower for large data |
| Data Factory | Structured data migration | Automated, scalable | Requires pipeline setup |
| Robocopy | Azure Files migration | Supports SMB/NFS | Manual ACL migration required |