This error occurs when the kubectl client does not have valid certificates to authenticate to the Kubernetes API server. Every certificate has an expiry date. Kubernetes has mechanisms to renew certificates automatically (kubeadm, for example, renews them during cluster upgrades), but if a certificate expires before that happens, kubectl can no longer authenticate.
I was getting the error "You must be logged in to the server (Unauthorized)" while executing kubectl commands. The commands had been working perfectly in the cluster a few hours earlier, and no modifications had been made to the cluster.
You can use the following command to check the expiry details of the certificates used internally in the Kubernetes cluster. If the certificates have expired, they need to be renewed.
kubeadm alpha certs check-expiration
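Note: on newer kubeadm releases the certs subcommands have graduated out of alpha, so the equivalent commands simply drop the alpha keyword:
kubeadm certs check-expiration
kubeadm certs renew all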
A sample response is given below.

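The exact columns vary with the kubeadm version, but the output looks roughly like the following, with one row per certificate managed by kubeadm (the timestamps here are illustrative):
CERTIFICATE                EXPIRES                  RESIDUAL TIME
admin.conf                 Mar 06, 2020 18:10 UTC   6h
apiserver                  Mar 06, 2020 18:10 UTC   6h
apiserver-kubelet-client   Mar 06, 2020 18:10 UTC   6h
controller-manager.conf    Mar 06, 2020 18:10 UTC   6h
scheduler.conf             Mar 06, 2020 18:10 UTC   6h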
In the output above, the certificates will expire in 6 hours. If you see that the certificates are about to expire (or have already expired), you can renew them by issuing the following command.
Note: Take a backup of all the old certificates and config files as a safety precaution.
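For example, copying the whole /etc/kubernetes directory (which holds the certificates under pki/ and the kubeconfig files) to a dated backup location is usually sufficient; the destination path below is only an example:
cp -a /etc/kubernetes /etc/kubernetes.backup-$(date +%F)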
kubeadm alpha certs renew all
The sample response from the above command is given below.

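The command prints one line per renewed certificate. The exact wording varies between kubeadm versions, but it looks something like this:
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed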
Now you can check the expiry date of the certificates and verify whether everything got updated.
kubeadm alpha certs check-expiration
Also, execute a few kubectl commands to ensure that kubectl has the right config file to interact with the cluster.
Sample commands are given below.
kubectl get pods --all-namespaces
kubectl get nodes
If you are still getting the error "You must be logged in to the server (Unauthorized)", try the following. When the certificates are renewed, the client certificate embedded in /etc/kubernetes/admin.conf is regenerated, but the copy in $HOME/.kube/config still contains the old, expired certificate, which is why kubectl keeps failing.
Log in to the master node and copy the config file /etc/kubernetes/admin.conf to $HOME/.kube/config.
The command is given below.
cp /etc/kubernetes/admin.conf $HOME/.kube/config
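If you run kubectl as a non-root user, the usual pattern from the kubeadm setup documentation also fixes the ownership of the copied file:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config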
After doing this, try executing kubectl commands again. You can copy this config file to any node or workstation where you have kubectl installed.
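For example, to push the file from the master to another machine over SSH (the user and host names below are placeholders, and ~/.kube is assumed to exist on the target):
scp /etc/kubernetes/admin.conf user@other-node:~/.kube/config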
Thanks for the great article, Amal. However, Flatcar Linux, CoreOS, etc. have neither kubeadm nor /etc/kubernetes/admin.conf.
Thank you for the valuable feedback, Joro. I will update the article based on this.
Amal, I have a slightly different problem as I am getting this error when all the certificates are up to date and correct. Here is what I am doing and maybe you can help.
1. Copy my config from my internal bare metal cluster into my .kube folder
2. File is named “config” and no env variable KUBECONFIG is set
3. Connection works fine
4. Rename the “config” file and add it to the KUBECONFIG variable along with another working config file
5. I now get this error when connecting to this cluster using this context. The other config file, associated with a different cluster, continues to work.
Any ideas why this one config file displays this error when the name of the file is changed and KUBECONFIG is set?
Looks like something is wrong with the config file. Can you first verify whether the copied config file is valid? You can check this by executing kubectl commands and passing the config file directly on the command line.
For example:
kubectl get pods --kubeconfig=config
Yeah, it's a valid file. In fact, the problem is strange to me, as it appears that kubectl only shows this error for the second config file in my environment variable, i.e.
KUBECONFIG=~/.kube/file1:~/.kube/file2
File 1 works and file 2 shows the error message. If I switch them around, file 2 will work and file 1 will display the error (changing contexts, of course).
I think the problem might be that both config files use the same user. They are both from two different bare metal internal clusters and both have kubernetes-admin as the user.
Dang. Turned out to be the user name. Just needed to make them unique and things are working. Thanks!
https://stackoverflow.com/questions/60655653/use-multiple-contexts-with-same-user-name-in-kubectl-config
Cool. Thanks for the update.
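For anyone who hits the same clash when merging kubeconfig files, you can confirm that two files define a user with the same name like this (the file names follow the commenter's example):
kubectl config view --kubeconfig=$HOME/.kube/file1 -o jsonpath='{.users[*].name}'
kubectl config view --kubeconfig=$HOME/.kube/file2 -o jsonpath='{.users[*].name}'
If both print kubernetes-admin, rename the entry in one of the files, both under users and in the user field of the corresponding context.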