What Happens When You Type 'google.com' and Press Enter?
My experience being asked this classic interview question: a complete walkthrough of the networking journey from browser to server, and why it matters for DevOps.
I'll never forget being asked this question during a DevOps interview. At first, I thought it was just another trivia question, but the interviewer stopped me halfway through and said, "Now tell me why this matters for the role you're applying for."
That's when it clicked. This wasn't about reciting a memorized answer; it was about demonstrating that I understood every layer of the network stack and, more importantly, where I could intervene as a DevOps engineer.
Why I Was Asked This Question
The interviewer explained that as a DevOps engineer, I'd regularly need to:
- Manipulate /etc/hosts to redirect traffic for local testing
- Configure firewall rules to route traffic to specific containers
- Debug DNS resolution issues in Kubernetes clusters
- Set up service meshes and ingress controllers
- Implement traffic routing for blue/green deployments or canary releases
Understanding the complete flow meant I'd know exactly which layer to target when troubleshooting or implementing solutions.
The Complete Journey (My Answer)
1. Browser Checks Its Cache
I started by explaining that before doing anything, the browser checks if it already knows the IP address for google.com:
- Browser cache: Has this domain been resolved recently?
- Operating system cache: Does the OS have it cached?
If found, the browser skips name resolution entirely and jumps ahead to step 4.
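If the machine runs systemd-resolved, you can poke at that OS-level cache directly; a quick sketch, assuming systemd-resolved is acting as the local stub resolver:
# Resolve through the local stub resolver and see the answer it returns
resolvectl query google.com
# Show cache size and hit/miss counters
resolvectl statistics
# Flush the OS-level DNS cache when a stale entry is causing trouble
resolvectl flush-caches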
2. Operating System Checks /etc/hosts
This is where I paused to emphasize the DevOps angle.
Before making any DNS requests, the OS checks /etc/hosts for static hostname mappings:
# /etc/hosts
127.0.0.1 localhost
192.168.1.100 google.com # Override DNS for testing
DevOps use case:
- Redirecting production domains to local development servers
- Testing container networking by mapping service names to IPs
- Simulating production environments locally
If there's a match, the OS uses that IP and skips DNS entirely.
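A handy check here: getent follows the lookup order in /etc/nsswitch.conf, so it sees your /etc/hosts overrides, while dig talks to DNS directly and ignores them. A quick sketch:
# Resolves via NSS, so an /etc/hosts entry wins if one exists
getent hosts google.com
# Pure DNS lookup for comparison; bypasses /etc/hosts entirely
dig +short google.com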
3. DNS Resolution
If not found in /etc/hosts, the OS performs a DNS lookup:
- Queries the configured DNS resolver (usually set in /etc/resolv.conf or handed out by DHCP)
- Recursive DNS query:
  - Local DNS resolver → root DNS servers
  - Root servers → .com TLD servers
  - TLD servers → Google's authoritative nameservers
- Returns the IP address (e.g., 142.250.185.46)
DevOps considerations:
- DNS caching layers (systemd-resolved, dnsmasq)
- Corporate DNS servers or internal DNS for service discovery
- Kubernetes DNS (CoreDNS) for pod-to-pod communication
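You can watch the whole delegation chain yourself with dig; a minimal sketch (the exact nameservers and IPs you see will differ):
# Iterate from the root servers down: root -> .com TLD -> Google's authoritative nameservers
dig +trace google.com
# Or query a specific resolver directly, e.g. Cloudflare's public DNS
dig @1.1.1.1 +short google.com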
4. Firewall Rules & Network Stack
Before the packet leaves your machine, it passes through:
- Application layer (browser)
- Operating system network stack
- Firewall rules (iptables, nftables, pf)
DevOps use case:
# Redirect traffic to a local container
iptables -t nat -A OUTPUT -p tcp --dport 80 -d 142.250.185.46 -j DNAT --to-destination 127.0.0.1:8080
This is essential for:
- Docker/Kubernetes networking (redirecting to containers)
- Service mesh sidecar proxies (Envoy, Linkerd)
- Load balancer testing
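Rules like that DNAT are easy to forget about, so I pair them with a way to inspect and remove them; a small sketch:
# Show the nat table's OUTPUT chain with rule numbers
sudo iptables -t nat -L OUTPUT -n --line-numbers
# Delete the redirect once testing is done (use the rule number printed above)
sudo iptables -t nat -D OUTPUT 1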
5. Routing Table & Gateway
The OS checks the routing table to determine where to send the packet:
route -n
# or
ip route show
Typical flow:
- If the destination is on the local network → send directly
- If it's external → send to the default gateway (your router)
DevOps consideration: In containerized environments (Docker, Kubernetes), virtual network interfaces and custom routing tables are configured to route traffic between containers and the host.
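Instead of reading the whole table, you can ask the kernel which route it would pick for this one destination; a sketch (the gateway, interface, and source IP in the sample output are illustrative):
# Which gateway, interface, and source IP will be used for this destination?
ip route get 142.250.185.46
# Sample output: 142.250.185.46 via 192.168.1.1 dev eth0 src 192.168.1.10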
6. Your Router (Home/Office Network)
Your router:
- Performs NAT (Network Address Translation)
- Translates your private IP (e.g., 192.168.1.10) to a public IP
- Forwards the packet to your ISP
DevOps parallel: This is similar to how kube-proxy implements Kubernetes Services, using NAT to translate a Service's virtual IP to the IPs of its backing pods.
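The source NAT a home router performs boils down to a single rule, and it's roughly what Docker's default bridge network does for outbound container traffic; a sketch (the 192.168.1.0/24 subnet and eth0 interface are assumptions for illustration):
# Rewrite the source address of packets leaving eth0 to that interface's own IP
sudo iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE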
7. ISP and Internet Backbone
The packet travels through:
- ISP's network
- Internet backbone routers (BGP routing)
- Google's edge network (anycast routing)
Each hop uses routing protocols (BGP, OSPF) to determine the best path.
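You can see those hops for yourself; a quick sketch with traceroute and mtr (hop counts and latencies vary, and some routers simply won't answer):
# Show each router between you and Google's edge
traceroute google.com
# mtr combines traceroute and ping; --report runs a fixed number of cycles and prints a summary
mtr --report --report-cycles 5 google.com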
8. Google's Load Balancer
The packet hits Google's infrastructure:
- Anycast routing brings you to the nearest Google datacenter
- Global Load Balancer (Layer 4/7) distributes traffic
- Edge servers handle TLS termination
DevOps equivalent:
- AWS ALB/NLB, GCP Load Balancer
- NGINX, HAProxy, Envoy
- Kubernetes Ingress Controllers
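A simple way to see anycast at work is to check which edge IP your connection actually landed on; a sketch using curl's write-out variables (the address you get depends on your location and resolver):
# Print the remote IP the TCP connection was made to
curl -s -o /dev/null -w 'connected to %{remote_ip}\n' https://www.google.com/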
9. TLS Handshake
Before sending any HTTP data:
- TCP 3-way handshake (SYN, SYN-ACK, ACK)
- TLS handshake:
  - Client sends supported cipher suites
  - Server responds with its certificate
  - Client verifies the certificate against a trusted CA
- Encrypted session established
DevOps tasks:
- Managing TLS certificates (Let's Encrypt, cert-manager)
- Configuring TLS termination at load balancers
- Implementing mTLS for service-to-service auth
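When TLS misbehaves, openssl s_client is usually my first stop; a sketch that completes a handshake and prints the certificate details:
# Handshake with SNI, then print the leaf certificate's subject, issuer, and validity dates
openssl s_client -connect google.com:443 -servername google.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates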
10. HTTP Request
Finally, your browser sends:
GET / HTTP/1.1
Host: google.com
User-Agent: Mozilla/5.0...
Accept: text/html...
Google's backend:
- Routes to appropriate microservice
- Generates the response
- Returns HTML, CSS, JavaScript
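curl -v reproduces this end to end, printing the DNS, TCP, TLS, and HTTP stages as they happen; --resolve pins the hostname to a specific IP for one request without touching /etc/hosts (the address below is just the example IP from earlier). A sketch:
# Verbose output shows name resolution, the TLS handshake, and the request/response headers
curl -v -o /dev/null https://google.com/
# Pin google.com to a specific IP for this request, bypassing DNS
curl -v -o /dev/null --resolve google.com:443:142.250.185.46 https://google.com/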
11. Response & Rendering
- Response travels back through all the same layers (in reverse)
- Browser receives HTML and starts rendering
- Additional requests for CSS, JS, images, etc.
- Page is displayed
Practical DevOps Applications
Testing with /etc/hosts
# Redirect production domain to staging
echo "10.0.1.50 api.production.com" | sudo tee -a /etc/hosts
Docker Networking Example
# Create a container and test connectivity
docker run -d --name web -p 8080:80 nginx
# Override DNS to route to container
echo "127.0.0.1 myapp.local" | sudo tee -a /etc/hosts
# Now visiting http://myapp.local:8080 routes to the container
Kubernetes Service Routing
apiVersion: v1
kind: Service
metadata:
name: google-proxy
spec:
type: ExternalName
externalName: google.com
This creates a DNS entry (a CNAME) in the cluster, so google-proxy.default.svc.cluster.local resolves to google.com.
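To verify it from inside the cluster, spin up a throwaway pod and resolve the Service name; a sketch (busybox's nslookup is minimal, but enough to confirm the name resolves through CoreDNS):
# One-off pod that resolves the ExternalName Service and cleans itself up afterwards
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup google-proxy.default.svc.cluster.local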
iptables Traffic Redirection
# Redirect all HTTP traffic to a local proxy
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-port 8888
What I Learned
After that interview, I realized this question wasn't about showing off theoretical knowledge. It was about proving I could:
- Debug connectivity issues at the right layer (DNS? Routing? Firewall?)
- Implement traffic manipulation for testing and deployment strategies
- Architect reliable systems with proper load balancing and failover
- Secure networks with appropriate firewall rules and TLS configuration
Now, whenever I'm debugging why a service can't connect or setting up a local development environment that mirrors production, I know exactly which layer to manipulate.