Hello @Pulugujju M Rao,
Hope you're doing well!
Architecture Overview
Internet Users
↓
Cloudflare Reverse Proxy (optional: WAF, CDN)
↓
Azure Standard Public Load Balancer (1 Public IP)
↓
Palo Alto VM-Series NVA (deployed in Hub VNET)
↓
Azure Internal Load Balancer (ILB)
↓
App Services (connected via Private Endpoint or VNET Integration in Spoke VNET)
Q: Can you please provide the detailed packet flow for ingress and egress traffic at each layer?
Ingress Traffic Flow (User to App Service):
Internet to Cloudflare: The user’s request first hits Cloudflare’s reverse proxy.
Cloudflare to Public Load Balancer: Cloudflare forwards the request to your Public Load Balancer’s public IP address.
Public LB to Palo Alto (Untrust): The Public Load Balancer receives the traffic (e.g., HTTPS on TCP 443) and uses a load-balancing rule to send it to the private IP of the active Palo Alto firewall's untrust interface.
Palo Alto Inspection & DNAT: The firewall examines the request using its Security Policy to decide whether it should allow it. If allowed, a DNAT rule rewrites the destination IP to point to the private IP of the App Service (or an internal load balancer in front of the App Service, if applicable). The firewall then routes the packet out through its trust interface into the Spoke VNet.
Palo Alto (Trust) to App Service: The packet travels across the VNet peering—from the Hub VNet to the Spoke VNet—until it reaches the App Service.
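The ingress steps above can be sketched as a series of destination rewrites. All IP addresses below are hypothetical placeholders, not values from your environment:

```python
# Hypothetical addresses used only for illustration.
PUBLIC_LB_FRONTEND = "20.0.0.10"   # Public LB frontend (public IP)
FW_UNTRUST_IP      = "10.0.1.4"    # Palo Alto untrust interface (Hub VNet)
APP_SERVICE_PE_IP  = "10.1.2.5"    # App Service private endpoint (Spoke VNet)

def ingress(packet):
    """Model the destination-IP rewrites a packet sees on the way in."""
    hops = [dict(packet, hop="internet -> public LB")]
    # Load-balancing rule: frontend public IP -> firewall untrust private IP.
    packet = dict(packet, dst=FW_UNTRUST_IP, hop="public LB -> firewall (untrust)")
    hops.append(packet)
    # Firewall DNAT rule: rewrite destination to the App Service private IP,
    # then route out the trust interface across the VNet peering.
    packet = dict(packet, dst=APP_SERVICE_PE_IP, hop="firewall (trust) -> app service")
    hops.append(packet)
    return hops

for hop in ingress({"src": "203.0.113.7", "dst": PUBLIC_LB_FRONTEND, "dport": 443}):
    print(hop["hop"], "| dst =", hop["dst"])
```

Note that the source IP is unchanged end to end; only the destination is rewritten, so the firewall sees the original client address for policy evaluation.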
Egress Traffic Flow (App Service to Internet):
App Service to Spoke Subnet: The App Service initiates an outbound connection, such as an API call.
UDR Applied: A User-Defined Route (UDR) set to 0.0.0.0/0 catches the outbound packet on the AppService subnet and routes it to the frontend IP of the Internal Load Balancer (ILB) in the Hub VNet.
Internal LB to Palo Alto (Trust): The ILB receives the packet and, using an HA Ports rule, forwards it to the trust interface of the active Palo Alto firewall.
Palo Alto Inspection & SNAT: At the firewall, the packet is checked against outbound policies. A SNAT rule then replaces the source IP (the App Service's private IP) with the IP of the firewall’s untrust interface. This ensures that the receiving system on the internet knows where to send its response.
Palo Alto (Untrust) to Internet: The firewall forwards the packet out to the public internet.
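The egress path can be sketched the same way, this time as a source-IP rewrite at the firewall. Again, all addresses are hypothetical:

```python
# Hypothetical addresses used only for illustration.
APP_SERVICE_IP   = "10.1.2.5"   # App Service outbound source (Spoke VNet)
ILB_FRONTEND_IP  = "10.0.2.10"  # Internal LB frontend (Hub VNet)
FW_UNTRUST_PUBIP = "20.0.0.20"  # Public IP on the firewall's untrust side

def egress(packet):
    """Model the hops and the single SNAT rewrite on the way out."""
    # UDR 0.0.0.0/0 on the App Service subnet sends the packet to the ILB.
    hops = [dict(packet, hop="app service -> UDR (0.0.0.0/0 -> ILB)")]
    # ILB HA Ports rule forwards to the active firewall's trust interface;
    # with Floating IP enabled the packet itself is not rewritten here.
    hops.append(dict(packet, hop="ILB -> firewall (trust)"))
    # Firewall SNAT: replace the private source IP with the untrust IP so
    # the internet destination can return traffic to the firewall.
    packet = dict(packet, src=FW_UNTRUST_PUBIP, hop="firewall (untrust) -> internet")
    hops.append(packet)
    return hops

for hop in egress({"src": APP_SERVICE_IP, "dst": "198.51.100.9", "dport": 443}):
    print(hop["hop"], "| src =", hop["src"])
```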
Q: What are the benefits of enabling Floating IP in the load-balancing rule, and how does it work?
Floating IP:
Floating IP, also known as Direct Server Return (DSR), is a setting available in Azure Load Balancer rules.
How it Works (Standard/Disabled):
By default, the load balancer rewrites the destination IP from its frontend IP to the backend server’s private IP. So, the backend sees its own IP as the destination.
How it Works (Floating IP/Enabled):
With Floating IP enabled, the load balancer leaves the destination IP unchanged. The packet reaches the backend (such as a Palo Alto firewall) with the destination IP still set to the load balancer’s frontend IP. For this to work, the backend NVA must be configured with a loopback or secondary IP that matches the load balancer's frontend IP.
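The difference between the two modes can be shown in a few lines. The IPs below are hypothetical placeholders:

```python
ILB_FRONTEND = "10.0.2.10"  # hypothetical ILB frontend IP
BACKEND_NIC  = "10.0.2.4"   # hypothetical backend (firewall) NIC IP

def delivered_dst(packet, floating_ip):
    """Destination IP the backend sees, depending on the Floating IP setting."""
    if floating_ip:
        # Floating IP enabled: destination left unchanged; the backend must
        # own a loopback/secondary IP matching the frontend to accept it.
        return packet["dst"]
    # Default: destination rewritten to the backend NIC's private IP.
    return BACKEND_NIC

pkt = {"src": "10.1.2.5", "dst": ILB_FRONTEND, "dport": 443}
print(delivered_dst(pkt, floating_ip=False))  # backend sees its own NIC IP
print(delivered_dst(pkt, floating_ip=True))   # backend sees the frontend IP
```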
Supporting Documents:
https://v4.hkg1.meaqua.org/en-us/azure/load-balancer/load-balancer-floating-ip
High Availability for NVAs (Internal LB):
Using an Internal Load Balancer with Floating IP enabled (via the HA Ports rule) allows both firewalls in the backend pool to be configured identically. The UDR points to the ILB's frontend IP, and the LB forwards traffic to the active firewall without changing the destination IP, allowing smooth and seamless failover.
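The failover behavior can be modeled as the ILB's health probe selecting a healthy backend while the destination IP stays constant. Firewall names and IPs below are hypothetical:

```python
# Hypothetical firewall pool behind an ILB HA Ports rule with Floating IP.
firewalls = [
    {"name": "fw-a", "healthy": True},
    {"name": "fw-b", "healthy": True},
]

def forward(packet, pool):
    """Send the packet to the first healthy backend. With Floating IP the
    destination stays the ILB frontend IP, so failover needs no UDR or
    NAT changes on either firewall."""
    for fw in pool:
        if fw["healthy"]:
            return fw["name"], packet["dst"]
    raise RuntimeError("no healthy firewall in the backend pool")

pkt = {"src": "10.1.2.5", "dst": "10.0.2.10"}  # dst = ILB frontend (hypothetical)
print(forward(pkt, firewalls))
firewalls[0]["healthy"] = False   # fw-a fails its health probe
print(forward(pkt, firewalls))    # traffic shifts to fw-b; dst unchanged
```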
Simplified Routing:
This reduces the complexity of routing on the NVA. It prevents the need for double NAT (first by the Load Balancer, then by the firewall) for outbound internet traffic.
Supporting Documents:
https://v4.hkg1.meaqua.org/en-us/azure/architecture/networking/guide/network-virtual-appliance-high-availability
Q: Is the Public Load Balancer performing any Network Address Translation (NAT) on inbound and outbound traffic?
Public Load Balancer and NAT (Network Address Translation)
Azure’s Standard Public Load Balancer handles NAT for both inbound and outbound traffic by default:
Inbound NAT:
For incoming traffic, the load balancer uses Destination NAT (DNAT). It replaces the public destination IP and port with the private IP and port of a backend virtual machine, as defined in load balancing or inbound NAT rules.
Outbound NAT:
For outbound traffic from a backend server without a public IP, the load balancer uses Source NAT (SNAT). It replaces the private source IP with its own frontend public IP. This enables the backend to access the internet using the same outbound IP. You can customize this behavior using outbound rules.
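Both NAT directions can be summarized as two small rewrite functions. The IPs and the SNAT port below are hypothetical:

```python
LB_PUBLIC_IP = "20.0.0.10"  # hypothetical frontend public IP
BACKEND_IP   = "10.0.1.4"   # hypothetical backend private IP

def inbound_dnat(packet):
    """Load-balancing / inbound NAT rule: public frontend -> backend private IP."""
    return dict(packet, dst=BACKEND_IP)

def outbound_snat(packet, snat_port):
    """Outbound rule: backend private source -> frontend public IP plus an
    allocated SNAT port, so return traffic can be mapped back to the backend."""
    return dict(packet, src=LB_PUBLIC_IP, sport=snat_port)

inbound = inbound_dnat({"src": "203.0.113.7", "dst": LB_PUBLIC_IP, "dport": 443})
outbound = outbound_snat({"src": BACKEND_IP, "dst": "198.51.100.9"}, snat_port=1025)
print(inbound["dst"], outbound["src"])
```

The SNAT port allocation is the part worth tuning in practice: with outbound rules you control how many ports each backend instance receives, which matters under high outbound connection counts.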
Supporting Documents:
https://v4.hkg1.meaqua.org/en-us/azure/load-balancer/load-balancer-overview
https://v4.hkg1.meaqua.org/en-us/azure/load-balancer/outbound-rules
https://github.com/PaloAltoNetworks/azure-terraform-vmseries-fast-ha-failover
Kindly let us know if the above helps or if you need further assistance on this issue.
Please "Accept the answer" if the information helped you. This will help us and others in the community as well.