Connecting isolated networks in Google Cloud without peering can be tricky, especially when overlapping IPs and security boundaries are involved.
In this post, I’ll walk through a practical solution I implemented to bridge two GCP networks using a lightweight TCP proxy. Rather than relying on complex subnet gymnastics or network peering, I used a dual-NIC VM and HAProxy to cleanly route traffic across projects. It’s a scalable pattern that keeps networks decoupled while still allowing secure communication between them.
To set the stage: imagine two Google Cloud projects, potentially even in separate organizations, each with its own isolated network. A client in Project A needs to access an API hosted in Project B. However, for security reasons, the API isn’t exposed to the public internet.
One option might be to peer the two networks directly, but that introduces potential issues with overlapping IP ranges. While it’s technically possible to resolve this through careful subnet planning, the approach adds significant complexity, especially when it comes to coordinating address spaces and network policies across teams. Worse, it doesn’t scale well; what happens when you need to connect a third network later on?
To address this, I chose to introduce a proxy between the two networks, allowing them to remain as decoupled as possible while still enabling secure communication.
Here we create a second VPC, vpc-proxy, in Project A, with a subnet range chosen to be convenient for Project B (i.e., one that doesn't collide with its address space). Because vpc-proxy lives in our own project, we can attach a VM network interface to it. As a result, we can create a VM with two network interfaces, connected respectively to vpc-network-a and vpc-proxy.
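As a rough sketch of the provisioning, here is what that could look like with gcloud. The project IDs, subnet names, region, and IP ranges below are illustrative assumptions, not values from a real setup:

```shell
# Create the proxy VPC and its subnet in Project A
# (names, region, and ranges are placeholder assumptions)
gcloud compute networks create vpc-proxy \
    --project=project-a --subnet-mode=custom
gcloud compute networks subnets create proxy-subnet \
    --project=project-a --network=vpc-proxy \
    --region=europe-west1 --range=192.168.1.0/24

# Create the dual-NIC proxy VM: nic0 on vpc-network-a, nic1 on vpc-proxy
gcloud compute instances create proxy-vm \
    --project=project-a --zone=europe-west1-b \
    --network-interface=network=vpc-network-a,subnet=subnet-a \
    --network-interface=network=vpc-proxy,subnet=proxy-subnet
```

Repeating the --network-interface flag is what gives the VM its two NICs; the interface order determines which one becomes nic0.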
A concrete example: imagine a Jenkins instance in Project A deploying to a Kubernetes cluster in Project B. Instead of exposing the Kubernetes API publicly, Jenkins can route its deployment traffic through the proxy VM, which forwards it securely to the API server.
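To make that concrete, here is one hypothetical way Jenkins could be pointed at the proxy instead of the API server directly, assuming the cluster name and SAN below (both are assumptions) and that the proxy's TCP mapping is in place:

```shell
# Hypothetical kubeconfig tweak on the Jenkins node in Project A:
# point the cluster entry at the proxy frontend rather than the API IP,
# while still verifying the API server's certificate under its real name.
kubectl config set-cluster project-b-cluster \
    --server=https://192.168.0.2:8000 \
    --tls-server-name=kubernetes.default.svc
```

Since the proxy forwards raw TCP, TLS terminates at the real API server; --tls-server-name only works if that name is among the certificate's SANs.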
Here’s how I configured the proxy setup:
L3 and L4 refer to the network and transport layers of the OSI model. At L3, we solve network communication with per-interface routing tables that make each network interface independent of the other. At L4, we create TCP forwarding rules with HAProxy, mapping [proxy interface IP]:[source port] to [API destination IP]:[API destination port].
Let’s dive into the configuration. The first step is to define two separate routing tables, one for each network interface. Google Cloud’s documentation provides solid guidance here, but here’s how I set it up in the proxy VM:
# First routing table
echo "1 rt1" | sudo tee -a /etc/iproute2/rt_tables
sudo ip route add 192.168.0.1 src 192.168.0.2 dev ens4 table rt1
sudo ip route add default via 192.168.0.1 dev ens4 table rt1
sudo ip rule add from 192.168.0.2/32 table rt1
sudo ip rule add to 192.168.0.2/32 table rt1
# Second routing table
echo "2 rt2" | sudo tee -a /etc/iproute2/rt_tables
sudo ip route add 192.168.1.1 src 192.168.1.2 dev ens5 table rt2
sudo ip route add default via 192.168.1.1 dev ens5 table rt2
sudo ip rule add from 192.168.1.2/32 table rt2
sudo ip rule add to 192.168.1.2/32 table rt2
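One caveat: the ip route and ip rule commands above don't survive a reboot (only the rt_tables file entries persist). One option, assuming an image where /etc/rc.local is still honored, is to replay them at boot:

```shell
# Replay the routing setup at boot (assumes the same interface
# names and addresses as above; adapt to your boot mechanism)
cat <<'EOF' | sudo tee /etc/rc.local
#!/bin/sh
ip route add 192.168.0.1 src 192.168.0.2 dev ens4 table rt1
ip route add default via 192.168.0.1 dev ens4 table rt1
ip rule add from 192.168.0.2/32 table rt1
ip rule add to 192.168.0.2/32 table rt1
ip route add 192.168.1.1 src 192.168.1.2 dev ens5 table rt2
ip route add default via 192.168.1.1 dev ens5 table rt2
ip rule add from 192.168.1.2/32 table rt2
ip rule add to 192.168.1.2/32 table rt2
EOF
sudo chmod +x /etc/rc.local
```

A systemd unit or a GCE startup-script would work equally well; the important part is that the rules are reapplied on every boot.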
Without specifying the source IP, any traffic from the proxy VM defaults to the primary interface (nic0), which in this case connects to Network A. So, a basic ping 10.3.0.3 would only reach a client in Project A.
To route traffic through the correct interface via the custom routing tables, we need to explicitly set the source IP in our commands. You can monitor the incoming traffic on the target VM with:
sudo tcpdump -i ens4 -qtln icmp
Now, let’s verify that routing is working as expected from the proxy VM:
# Reach the client in Network A
ping -I 192.168.0.2 10.3.0.3
# Reach the server in Network B
ping -I 192.168.1.2 10.3.0.4
# If a web server is running on the server
curl --interface 192.168.1.2 http://10.3.0.4
With routing in place, the final step is to install and configure HAProxy to handle the TCP forwarding.
First, install HAProxy:
sudo apt install haproxy
Then, edit the configuration file at /etc/haproxy/haproxy.cfg with the following:
global

defaults
    mode tcp
    timeout client 30s
    timeout server 30s
    timeout connect 30s

frontend network-a-frontend
    bind 192.168.0.2:8000
    default_backend network-b-server

backend network-b-server
    mode tcp
    source 192.168.1.2
    server upstream 10.3.0.4:443

Note the mode tcp in the defaults section: without it the frontend would fall back to HTTP mode, and HAProxy refuses to pair an HTTP frontend with a TCP backend.
In the backend section, the source directive ensures that traffic is routed using the correct network interface and corresponding routing table.
You could also define additional frontends and backends to route traffic in the opposite direction—e.g., from Project B to A.
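For instance, a reverse mapping might look like the following (the client-side port 8080 is a placeholder for whatever service runs in Project A):

```
frontend network-b-frontend
    bind 192.168.1.2:8000
    default_backend network-a-client

backend network-a-client
    mode tcp
    source 192.168.0.2
    server upstream 10.3.0.3:8080
```

The pattern is symmetric: bind on the interface facing the caller, and set source to the interface facing the destination so the matching routing table is used.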
To apply the configuration:
sudo service haproxy restart
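Before restarting, it's worth letting HAProxy validate the file, so a typo doesn't take the proxy down:

```shell
# Check the configuration without starting the service;
# exits non-zero and prints the offending line on errors
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
```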
Now test the setup from the client in Project A. Since the backend forwards to port 443, the client must speak HTTPS; -k skips certificate-name verification because we connect by IP rather than the API's hostname:
curl -k https://192.168.0.2:8000
If everything is set up correctly, the request should be forwarded to the API in Project B 🎉
This solution keeps the two networks decoupled while allowing TCP communication. Maintaining the proxy is as simple as managing HAProxy mappings.