
Linux Awk Command

Can You Handle The Power?

The awk command combines the functions of grep and sed, making it one of the most powerful Unix commands. Using awk, you can plug words from each line of an input file into a template, or perform calculations on the numbers in a file. (In case you're wondering how awk got such an offbeat name, it's derived from the surnames of the three programmers who invented it: Aho, Weinberger, and Kernighan.) To use awk, you write a miniature program in a C-like language that transforms each line of the input file. We'll concentrate only on awk's print function, since that's the most useful and the least confusing of all the things awk can do. The general form of the awk command is

awk <pattern> '{print <stuff>}' <file>

In this case, stuff is going to be some combination of text, special variables that represent each word in the input line, and perhaps a mathematical operator or two. As awk processes each line of the input file, each word on the line (separated by spaces or tabs) is assigned to a variable named $1 (the first word), $2 (the second word), and so on. (The variable $0 contains the entire line.)

Let's start with a file, words.data, that contains these lines:

nail hammer wood
pedal foot car
clown pie circus

Now we'll use the print function in awk to plug the words from each input line into a template, like this:

awk '{print "Hit the",$1,"with your",$2}' words.data
Hit the nail with your hammer
Hit the pedal with your foot
Hit the clown with your pie
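
Since the field variables are just placeholders, you can print them in any order, and $0 gives back the whole line. As a quick illustration (not part of the original example), running these on the same words.data file should produce:

awk '{print $3,$2,$1}' words.data
wood hammer nail
car foot pedal
circus pie clown

awk '{print "Full line:",$0}' words.data
Full line: nail hammer wood
Full line: pedal foot car
Full line: clown pie circus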

Say some of the data in your input file is numeric, as in the grades.data file shown here:

Rogers 87 100 95
Lambchop 66 89 76
Barney 12 36 27

You can perform calculations like this:

awk '{print "Avg for",$1,"is",($2+$3+$4)/3}' grades.data
Avg for Rogers is 94
Avg for Lambchop is 77
Avg for Barney is 25
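
awk can also keep a running total across lines. The lesson doesn't cover this, but as a small sketch: the special END pattern runs after the last input line has been read, and the built-in variable NR holds the number of lines processed, so you could compute the overall class average like this:

awk '{total += ($2+$3+$4)/3} END {print "Class avg is",total/NR}' grades.data
Class avg is 65.3333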

So far, we haven't specified any value for pattern in these examples, but if you want to select only certain lines for processing (and skip the rest), you can enter something like this:

awk /^clown/'{print "See the",$1,"at the",$3}' words.data
See the clown at the circus

Here, we told awk to consider only the input lines that start with clown. Note also that there is no space between the pattern and the print specifier. If you put a space there, the shell passes the pattern and the quoted action as separate arguments, so awk treats /^clown/ as the entire program and tries to open {print "See the",$1,"at the",$3} as an input file, which fails. But all this is just the tip of the awk iceberg; entire books have been written on this command. If you are a programmer, try the man awk command.
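
One more trick before you dig into the manual (this variation isn't covered above): the pattern doesn't have to be a regular expression; it can also be a comparison on the fields. When the pattern contains characters the shell treats specially, such as > or $, it's safest to put the pattern and the action together inside one set of single quotes. For example, using the grades.data file:

awk '$2 > 50 {print $1,"passed the first test with",$2}' grades.data
Rogers passed the first test with 87
Lambchop passed the first test with 66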

For more information on the awk command, see the awk manual.

Comments - most recent first
(Please feel free to answer questions posted by others!)

Tapani Koponen     (04 Dec 2012, 18:45)
Hello.
It looks like many people are requesting help for quite simple problems. If they knew a little bit more about unix pipe commands, these "problems" would be no problems.
Even a little knowledge of GREP, SORT, SED, TR etc. would help.
It really is interesting to explore unix/linux command prompt programming.
I recommend!
Korppu
John Edward Bolivar     (10 Aug 2012, 19:46)
hello,

Please could you help me. I am running the following line of instruction:

ipcs | grep rmtadm | awk '{print "ipcrm -" $ 1, $ 2}'

ipcrm-m 1900555
ipcrm-m 589836
ipcrm-m 557069
ipcrm-m 3342350
ipcrm-m 9306127

Each of the lines that was generated needs to be run as a command, e.g. "ipcrm-m 1900555" or "ipcrm-s 139369".
GUNAVEER PAUL     (27 Mar 2012, 03:55)
Hi Bob,
I would like to know the answers to all the questions asked above. Where can I find them? Do I need to register for that?
mauludi     (20 Feb 2012, 01:12)
Dear Sir,

I wonder how to manage two or more data files using an awk program. For example:
1)
data1:
48 39 58
34 56 78

data2:
6 8 7
2 4 5

could we perform the mathematical operation data1 - data2, so that the result (data3) is:
42 31 51
32 52 73

2) or a bit more complex, for example we have
data1
12 34 56
13 78 76
34 76 45

data 2
58 90 34
76 56 45
34 89 23

print the second field/column of file 2 where it is greater than the maximum value of data1 (78); then the result is:
90
89

I hope there is an awk program for that purpose, but if not, using another program is okay.

thank you very much in advance


madesh     (17 Feb 2012, 06:13)
My directory contains a number of files and subdirectories.
These files should be removed after one month; how can we write a command for this scenario?

Please reply as soon as possible.

Regards
Madosh
9886551213
madesh     (17 Feb 2012, 06:10)
1) Please let me know how to see the first record in a zip file in Unix, without unzipping the file.
It would be very helpful to me.

Nirav     (16 Feb 2012, 00:35)
[crestel@oramed ~]$ mpstat
Linux 2.6.18-194.el5 (oramed.localdomain) 02/16/2012

10:46:48 AM CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s
10:46:48 AM all 13.84 0.00 0.46 0.02 0.03 0.19 0.00 85.46 2586.20
[crestel@oramed ~]$ iostat
Linux 2.6.18-194.el5 (oramed.localdomain) 02/16/2012

avg-cpu: %user %nice %system %iowait %steal %idle
13.84 0.00 0.68 0.02 0.00 85.46

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 8.36 10.86 384.18 5267308 186365004
sda1 8.36 10.85 384.18 5265266 186365004
sda2 0.00 0.00 0.00 1618 0


1) I just want only intr/s and the time from mpstat.
2) I just want only tps and the time from iostat.
Bob Rankin     (10 Jan 2012, 13:16)
@santhiya - Sorry this is not the place to post your homework assignments.
santhiya     (10 Jan 2012, 09:57)
I have a problem:

The data file `inventory-shipped', represents information about shipments during the year. Each record contains the month of the year, the number of green crates shipped, the number of red boxes shipped, the number of orange bags shipped, and the number of blue packages shipped, respectively.
Jan 13 25 15 115
Feb 15 32 24 226
Mar 15 24 34 228
Apr 31 52 63 420
May 16 34 29 208
Jun 31 42 75 492
Jul 24 34 67 436
Aug 15 34 47 316
Sep 13 55 37 277
Oct 29 54 68 525
Nov 20 87 82 577
Dec 17 35 61 401
Perform the following tasks:
1. Display the report's header information.
2. Calculate and display each quarterly total.
Mahendra     (09 Dec 2011, 10:55)
Hi friends,

Can anyone help me with my problem?

I have one file (the input file, as shown below):

Input File:

Name Mahendra
Roll 450
Class ECE

Name Nandu
Roll 460
Class EEE

Name Jyothi
Roll 440
Class CSE
.
.
.
.

I want output as shown below, using the C language:

Output File:

Name Roll Class
Mahendra 450 ECE
Nandu 460 EEE
Jyothi 440 CSE
.
.
.
Thanks and Regards
Mahendra
Acein L.     (05 Dec 2011, 22:36)
Nice Tutorial.
Karn Kumar     (05 Dec 2011, 11:23)
awk '/^\/devices/

Can you explain the above syntax to me, particularly the awk /^\/ part?
thanks
Karn Kumar     (05 Dec 2011, 11:21)
root@cspbio1: luxadm -e port | awk '/^\/devices/ { print "luxadm -e dump_map "$1 }' | sh
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 3e0026 0 50060482c465401d 50060482c465401d 0x0 (Disk device)
1 3e1a00 0 210000e08b94a983 200000e08b94a983 0x1f (Unknown Type,Host Bus Adapter)
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 330006 0 50060482d52d11d7 50060482d52d11d7 0x0 (Disk device)
1 330600 0 210100e08bb4a983 200100e08bb4a983 0x1f (Unknown Type,Host Bus Adapter)
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 6f0006 0 500507630f429434 500507630f029434 0x1 (Tape device)
1 6f0007 0 500507630f429435 500507630f029435 0x1 (Tape device)

Can you tell me what "'/^\/" means here?
Ramlal     (27 Nov 2011, 05:14)
Hi,

I have file like this

2 00123456
1 00123049234
5 0019504560
11 00192389423984
13 001893475892375
6 0018937498234

I want to filter this file by reading the first number and keeping lines where it is greater than 6. Could you kindly help me with how to implement this in awk?
Rosh     (04 Nov 2011, 17:30)
i have two files,a.txt and b.txt, how can i merge the contents of both text files.
e.g :-

a.txt
create table a _bkp as select * from a
create table c _bkp as select * from c
create table a _bkp as select * from a

b.txt
where add='DELHI';
where add='DELHI';
where add='PUNE';


i want merge of both.

create table a _bkp as select * from a where add='DELHI';



Solution: paste a.txt b.txt > c.txt
Rosh     (04 Nov 2011, 17:24)
@vijay: try the paste command for combining the text from two files.
eg: paste file1.txt file2.txt > file3.txt
vijay     (04 Nov 2011, 03:32)
hi..

I want to remove duplicates and print only the first occurrences of the lines in a file. I want to do this using the awk command.

EG:-
a
a
b
c
b
c
d
d
o/p:-
a
b
c
d

Rosh     (03 Nov 2011, 21:54)
i have the following text in a table
cat
dog
donkey

I want to turn each of these into an SQL query, which should look exactly like this:

select count(*) from animals where name='cat';
select count(*) from animals where name='dog';
select count(*) from animals where name='donkey';

I tried this

awk 'print "select count(*) from animals where name='"$1"';"}' filename.txt

but got awkward result .
Please help anyone
Vishal     (01 Nov 2011, 08:47)
i have two files,a.txt and b.txt, how can i merge the contents of both text files.
e.g :-

a.txt
create table a _bkp as select * from a
create table c _bkp as select * from c
create table a _bkp as select * from a

b.txt
where add='DELHI';
where add='DELHI';
where add='PUNE';


i want merge of both.

create table a _bkp as select * from a where add='DELHI';
nmosh     (29 Oct 2011, 12:38)
I have a problem. I want to read two files (both are sets of numbers, and the second file has only 2 numbers), multiply each number in file1 by the first number in file2, divide by the second number in file2, and write the result to another file. How can I do that?
I would appreciate it if anyone can help me.

Rakesh     (28 Oct 2011, 06:46)
How can I access a file line by line using the awk command?

For example, I have a file as shown below:

1234
4567

How can I use the awk command to process the file line by line,

i.e. process the first line and perform the operations, then process the second line?
Dipesh.Trivedi     (23 Oct 2011, 01:44)
dear team,
I want to grep the last 2 words from a line which lies on the line above my main content line.

Please suggest a way ahead ASAP.

Regards,
Dipesh A Trivedi
Mustafa     (18 Oct 2011, 09:07)
I have 3 folders: the first has just the subscriber numbers, the second has some of the subscriber numbers plus data from the first file, and the third folder has the other subscribers. I want a script (or to know how to write one) that searches the first folder, compares each subscriber with the subscribers in folders 2 and 3, and when it finds a match, outputs it to a new folder along with the data.

If you want, I can send the folders to you; send me your email.
Saeed PV     (17 Oct 2011, 14:14)
I have a file containing huge amounts of data as below:
23,abc,Oct122011
24,cdf,Nov062011
54,sdf,Oct282011
My requirement is to replace the date field as below:
23,abc,Oct12011
24,cdf,Nov2011
54,sdf,Oct22011
i.e., if the day is less than 10, output only month and year; if the day is between 11 and 20, output month, "1", and year; and if the day is greater than 21, output month, "2", and year.
How can this be done? Can anybody help me?
manoj     (17 Oct 2011, 09:25)
I have a file with 14 columns and 50 rows of data, and I need to replace one of the columns with the output of a command. Can anyone give me a clue for generating a script?

I used this option

#!/bin/sh
# read file line by line
file="/export/home/m/report.output"
while read line
do
mycol1=`echo $line | awk '{print $9}'`
mycol2=`echo $line | awk '{print $11}'`
mycol_new1=`echo <command> <switch> $mycol1`
mycol_new2=`echo <command> <switch> $mycol2`
echo $line | awk '{print $1" ",$2" ",$3" ",$4" ",$5" ",$6" ",$7" ",$8" ",'"$mycol_new1"'" ",$10" ",'"$mycol_new2"'" ",$12" ",$13" ",$14" "}'
done < $file
Ioana     (05 Oct 2011, 10:15)
Hi,

I have a file containing something like this:

123456720110827|339|1|1317117124000|834105|339
123456720110827|339|1|1317108545000|834097|339

I want to replace the first "339" with a text, for instance: "OK", and keep the second occurrence of 339 as it is, so it will look like this:

123456720110827|OK|1|1317117124000|834105|339
123456720110827|OK|1|1317108545000|834097|339

Thanks and regards,
Ioana.
Sudip     (09 Sep 2011, 06:22)
Very good explanation. Thanks!
debasis     (24 Aug 2011, 06:26)
How do I find every odd-numbered line of a file using awk?
Suppose test.txt contains 10 lines; I want only line numbers 1, 3, 5 ... 9.
manish     (12 Jul 2011, 10:18)
Hi,

I have an awk operation that operates on an input file. The input has 4 slashes (\\\\).
e.g. input = hakksld\\\\hjakdlld\\\\jklsl
After the awk operation, they get converted to a single \. Please tell me how I can keep the \\\\ intact and avoid it being reduced to \? I am using gsub.
srinivas     (11 Jul 2011, 06:15)
I have one file contains three fields like this
64.204 5.6859 0
64.395 5.2340 0
64.690 5.140 4560
This file has 3,500,000 rows. My problem is that in the third field I want to remove the zeros, but the values should remain in their corresponding rows. Can you suggest how to do this using awk?
Nach     (16 Mar 2011, 02:56)
How can we find out the last date of the month?

Subtract one day from the first day of the next month.
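
For illustration (not part of the original comment), assuming GNU date is available, that approach could be written as:

date -d "$(date +%Y-%m-01) +1 month -1 day" +%d

which prints the day number of the last day of the current month.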
Rankesh     (03 Mar 2011, 22:31)
Awesome,great explanation..with the help of this I made my own script for auditing.
Prashant Bilaiya     (31 Jan 2011, 03:20)
How can we find out the last date of the month?
Amruta     (19 Jan 2011, 17:27)
Great explanation, thanks!!
DS2010     (11 Jan 2011, 00:40)
i need script to "test the fileserver regularly by trying to check the mount point exists"
Tass1     (10 Jan 2011, 11:31)
Hi, I have a file.txt as below:
123456789|value|another|more|last
A23456789|value|another|more|last
B23456789|value|another|more|last

What I would like to do is
a) Read the first 9 chars into a variable
b) Then use that variable to do a DB2 call
c) Append to the file the value received from DB2, to end up with something like this:

123456789|value|another|more|last|DB2Value1
A23456789|value|another|more|last|DB2Value2
B23456789|value|another|more|last|DB2Value3


*We don't need to concern ourselves with the DB2 call, syntax, or logic.

Thanks
Amire Dehnadi     (30 Dec 2010, 01:32)
Hi,

I have this output
LA1S1
LA1S2
LA1S3
LA1S4
.
.
.

& I want output only in this form
LA1 S1
LA1 S2
LA1 S3
LA1 S4
.
.
.

can anyone tell me the command to do it?
Vicky     (09 Dec 2010, 02:33)
Hi,

I have this output
-rw-r--r-- 1 root other 10K Dec 9 05:09 15_DEL_DL_EUCD_20101209.txt

& I want output only in this form
10K Dec 9 05:09 DL_EUCD

can anyone tell me the command to do it?
Jeremy     (01 Dec 2010, 19:04)
Short and efficient for starting with awk.
Cheers
charitra kocheri     (30 Nov 2010, 08:11)
I need a shell script that copies a list of files from one directory to another and during copy the first character of filename should be changed to uppercase and others to lower case.
Please help me out...!!!
omid     (20 Nov 2010, 05:15)
Excellent as ever. You are a legend man!
edubidu     (03 Nov 2010, 07:29)
hi, how can I search a line and then write below this line?
mastan     (01 Nov 2010, 05:07)
Hi,
I want to execute awk in perl as follows.
-------
#!/usr/bin/perl
print `free -m | tail -1 | awk "{print int((\$3 / \$2) * 100 )}"`;

-----------
Can anyone who knows please assist me?
Raghu     (26 Oct 2010, 03:59)
Hi,
I want to pick out a single character from a word by using awk; how can we achieve this?

EX: from asdfghk I want to pick only "f"

Satyaveer Arya     (01 Oct 2010, 23:46)
I want to extract some parts of a file using awk command and have to save the extracted matter in another file. How can I do it? Please help me.
Bob Rankin     (24 Sep 2010, 10:55)
@vinoth - Try this

cat test.txt | cut -d'-' -f2- | sed 's/-env//'

It cuts on the first dash, taking everything after it. Then we change "-env" to null -- done!
vinoth     (24 Sep 2010, 09:19)
Is it possible to achieve the below requirement?

I have a file "test.txt" that contains the data:

unix-servername-001-env
unix-servername001-env
unix-server-name-001-env

I need to get the server name alone like

servername-001
servername001
server-name-001

Thanks & Regards


Aditya Pratap V.     (20 Sep 2010, 09:46)
Hi,
Thanks for the great introduction to awk! However I have a doubt. Suppose I want to print the first field from words.data and the second field from grades.data using awk, how do I go about?
AtulRaj     (20 Sep 2010, 07:14)
Hey ,
this makes it easy to understand the actual purpose of awk.
Really nice one.
keep it up..
thank you..
Atulraj..
senthil     (15 Sep 2010, 17:55)
Thanks. It really helped me. Need to explore lot.
gulleytnl     (15 Sep 2010, 12:39)
If I want to use more than one pattern, how do I go about doing so?
Punitha     (14 Sep 2010, 08:47)
Hi,
Its really very useful to all.Thanks dude!..
Bob Rankin     (10 Sep 2010, 12:04)
@Jared, If the columns are separated by a single character, something like this will work:

cut -d' ' -f2,5-40

Or leave off the final column:

cut -d' ' -f2,5-
Jared     (09 Sep 2010, 17:19)
One thing I can't seem to find anywhere...


How do you easily cut columns from a file?

essentially i want to do something like

cat file.txt | awk '{print $2,$5,$6,$7.... all the way to $40}

but i want to write it simply like

cat file.txt | awk '{print $2,$5-$END};

Anyone know how?
Srikrishnan     (02 Sep 2010, 09:23)
To count the number of CTRL+F characters in a file, we are using the command below, but the files are huge (1 million records +) and it takes a long time to produce the count (approx. 40 minutes).

awk '{cnt+=gsub(//,"&")}END {print cnt}' Sri.dat

Please help on tuning the performance

Thanks in advance
bhagi     (12 Aug 2010, 01:18)
hi rayen

to print the sum of all the first elements, use:
awk '{sum+=$1} END {print "sum is",sum}' num.data
I think it is simpler......
balaji     (08 Aug 2010, 09:33)
thanx, it is very easy to understand
Anil kumar     (08 Aug 2010, 00:57)
nice explanation !!.......great work..
bhanu     (04 Aug 2010, 16:12)
How can I print the next row's first element in a file?
Satish Mongam     (30 Jul 2010, 07:13)
It's very useful and easy to understand.
rayen     (14 Jul 2010, 12:17)
@Bob
That's perfect.
thank you so much again
Bob Rankin     (14 Jul 2010, 12:11)
In this case, the cut command splits on the specified delimiter (space) and returns only the first field (f1).

See the cut command help: //lowfatlinux.com/linux-columns-cut.html
rayen     (14 Jul 2010, 12:02)
It works, thanks a lot Bob
Could you explain what "cut -d' ' -f1" does.
Thanks a lot again
Bob Rankin     (14 Jul 2010, 10:25)
@rayen - Try this:

cut -d' ' -f1 | awk '{sum+=$1} END {print sum}'
rayen     (14 Jul 2010, 09:58)
HI,
How can I get the sum of all the $1 values,
i.e. the sum of the first number of each line?

regards
Esra     (05 Jul 2010, 05:59)
thanks, its been very useful
Nick     (20 Jun 2010, 22:19)
Gustavo,

I think the below might do what you want

awk '!/^mytest/''{print $1}' test.txt

gaurav     (17 Jun 2010, 04:54)
Thanks great explanation!
Gustavo     (28 May 2010, 08:15)
Hi,
how can I remove or edit text lines in a file with specific content?

For example, in a file called test.txt I want to remove the text "mytest":

-----------------
myfirsttest
is not my tsxt
my test is
mytest
thanks
-----------------
Sishui     (10 May 2010, 20:30)
Thanks, great explanation!!
surendar     (06 May 2010, 07:50)
It's good. Simple and easily understandable.
sreedhar     (05 May 2010, 05:23)
Thanks...for simple but effective illustrations
Magesh     (05 May 2010, 00:01)
This article has made me understand the basics of the awk command. Really a useful article. Thank you, sir.
Soji Antony     (25 Apr 2010, 15:02)
thanks......for ur valuable information
vivek koul     (09 Apr 2010, 07:08)
what is a filter in linux
Gautam     (27 Mar 2010, 12:53)
Gr8 snapshot of awk command.
Vinay     (25 Mar 2010, 22:29)
Good Information with examples for better understanding.
mastan     (23 Mar 2010, 01:12)
Gr8 start up of awk
Anu     (18 Mar 2010, 09:14)
at Last i know the command awk Thanks.
Ashwini     (17 Mar 2010, 03:06)
thanks for giving me good information about awk.
Artur     (06 Mar 2010, 07:08)
Thanks!
Stephan Reiner     (03 Mar 2010, 05:27)
Very good intro, thanks for putting it all together!
Rajesh     (02 Feb 2010, 02:54)
Great solution for my search.
