Saturday 25 February 2017

How to check Ubuntu Version

Do you want to check the Ubuntu version you are running on your system? Or are you on a public system and want to check which version of Ubuntu it is running? This article will tell you how to check the Ubuntu version when you don't know it. There are two ways to do that: through the terminal or through Unity. We will look at both. 

Checking Ubuntu Version from the Terminal 


If you want to check the Ubuntu version from the terminal, open a terminal and type the following command: 

lsb_release -a

This command will show you the distributor ID, description, release, and codename of the version you are running. 
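For example, on a machine running Ubuntu 16.04 (the release used throughout this post), the output would look something like this:

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04 LTS
Release:        16.04
Codename:       xenial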

If you want to see only the release, use the following command:

lsb_release -r

Similarly, you can use -c for the codename and -d for the description. Adding the -s (short) option prints just the value without the label.
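For instance, combining -r with -s prints only the release number, which is handy in scripts. On the 16.04 machine from the example above:

lsb_release -rs
16.04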



There is another way of finding the Ubuntu version: checking the os-release file. On Ubuntu, /etc/os-release is a symlink to /usr/lib/os-release, so either path works. Execute the following command in the terminal to get the detailed information:

cat /usr/lib/os-release

NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com"
SUPPORT_URL="http://help.ubuntu.com"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu"
UBUNTU_CODENAME=xenial
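If you only need a single field from this file, such as the version number, you can filter it with grep (a minimal example, using the /etc/os-release path mentioned above):

grep VERSION_ID /etc/os-release
VERSION_ID="16.04"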


You can get the Linux kernel version and more system information through the uname command. Type the command below in the terminal: 

uname -a 


In the output, you will find the kernel release along with details such as the hostname, architecture, and build date. 
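The exact output depends on your machine; on a 16.04 installation it would look something like the line below (the hostname "mylaptop" and the particular kernel build are illustrative):

Linux mylaptop 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

If you only want the kernel release, uname -r prints just that field.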

Checking Ubuntu Version through Unity


If you have Unity on your system, you can check from there too. Open System Settings from the Launcher or through the Dash and click on the Details icon under the System section. This shows the device name, memory, processor, graphics, disk, and OS type along with the Ubuntu version. Note that the Unity method shows only the release and not the exact version number, for which you will have to use the terminal. 



So, the next time you are on a friend's PC or a classroom machine running Ubuntu and need to know which version it is running, use any of the methods above to check the Ubuntu version. Let me know in the comments if you face any difficulty. 

Thursday 2 February 2017

GitLab went offline after SysAdmin deleted the wrong folder



GitLab, an online source code repository service similar to GitHub, went offline for more than 12 hours after one of its sysadmins deleted the wrong folder in production. The service has since been restored, and the data loss affects less than 1% of the user base, specifically peripheral metadata written during a roughly six-hour window. 

GitLab kept posting updates on its recovery operations in a public Google Docs file. The possible impact, according to the doc, is: 

Impact



  • ±6 hours of data loss
  • 4613 regular projects, 74 forks, and 350 imports are lost (roughly); 5037 projects in total. Since Git repositories are NOT lost, we can recreate all of the projects whose user/group existed before the data loss, but we cannot restore any of these projects’ issues, etc.
  • ±4979 (so ±5000) comments lost
  • 707 users lost potentially, hard to tell for certain from the Kibana logs
  • Webhooks created before Jan 31st 17:20 were restored, those created after this time are lost


Also, several problems were encountered during the restoration process, many of them involving backups failing silently (see the sketch after the list):


  • LVM snapshots are by default only taken once every 24 hours. YP happened to run one manually about 6 hours prior to the outage
  • Regular backups seem to also only be taken once per 24 hours, though YP has not yet been able to figure out where they are stored. According to JN these don’t appear to be working, producing files only a few bytes in size.
  • SH: It looks like pg_dump may be failing because PostgreSQL 9.2 binaries are being run instead of 9.6 binaries. This happens because omnibus only uses Pg 9.6 if data/PG_VERSION is set to 9.6, but on workers this file does not exist. As a result it defaults to 9.2, failing silently. No SQL dumps were made as a result. Fog gem may have cleaned out older backups.
  • Disk snapshots in Azure are enabled for the NFS server, but not for the DB servers.
  • The synchronisation process removes webhooks once it has synchronised data to staging. Unless we can pull these from a regular backup from the past 24 hours they will be lost
  • The replication procedure is super fragile, prone to error, relies on a handful of random shell scripts, and is badly documented
  • SH: We learned later the staging DB refresh works by taking a snapshot of the gitlab_replicator directory, prunes the replication configuration, and starts up a separate PostgreSQL server.
  • Our backups to S3 apparently don’t work either: the bucket is empty
  • We don’t have solid alerting/paging for when backups fails, we are seeing this in the dev host too now.
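
A recurring theme in the list above is backups failing silently. The lines below are not GitLab's setup, just a minimal sketch of the kind of sanity check that catches an empty or missing dump before you actually need it (the path, size threshold, and mail address are all made up for illustration):

# Hypothetical check: alert if last night's database dump is missing or suspiciously small
BACKUP=/var/backups/db-$(date +%F).sql.gz   # illustrative path
MIN_BYTES=1048576                           # dumps smaller than ~1 MB are suspicious here
SIZE=$(stat -c %s "$BACKUP" 2>/dev/null || echo 0)
if [ "$SIZE" -lt "$MIN_BYTES" ]; then
    echo "Backup $BACKUP is missing or only $SIZE bytes" | mail -s "Backup check failed" admin@example.com
fi

Even a check this small would have flagged the few-bytes-in-size backup files mentioned above.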


GitLab, in its blog, said: "Losing production data is unacceptable, and in a few days we'll post the five whys of why this happened and a list of measures we will implement". 

Twitter is praising GitLab for the transparency with which the company has handled the situation. Everything was updated through the blog, the Twitter account, and the Google Doc. This was a great way of keeping users and the press informed, and GitLab surely deserves praise for it. That said, I am sure neither GitLab nor any other organisation would ever want to find itself in such a situation.