real    0m2.456s
user    0m4.440s
sys     0m0.460s
How can the user time be greater than the real (wall clock) time?
Real time is just that: the actual wall-clock time that elapses while the program runs. User and sys, however, are both CPU time, and a system can have multiple CPUs, so the theoretical maximum available CPU time is (real time × number of CPUs). I never see this at work because we only use single-CPU ARM-based chips.
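We can sanity-check this against the figures above. The run fits a two-CPU machine (an inference from the numbers, not something the output states): user + sys = 4.440 + 0.460 = 4.900s, which just fits inside the 2 × 2.456 = 4.912s budget. To count the CPUs on your own box:

# Count the online CPUs; nproc is from GNU coreutils, and on older
# systems counting /proc/cpuinfo entries gives the same answer
nproc
grep -c ^processor /proc/cpuinfo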
This can be illustrated with the following script, where we spawn 10 processes in the background and examine the script's usage statistics:
#!/bin/bash
# Script t.sh
# (c) Martin Jackson 2010
declare -i procs=$1
declare path=$2
f()
{
    # Checksum every regular file directly under $path, discarding
    # the output; we only care about the CPU time this burns
    find "$path" -maxdepth 1 -type f | xargs md5sum > /dev/null
}
echo "Time for 1 process ..."
time f
echo -e "\n\nTime for $procs processes ..."
time {
    declare -i i
    for i in $(seq 1 $procs)
    do
        f &
    done
    wait
}
echo -e "\n\n"
# EOF
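The script takes the process count and the directory to scan as arguments; make it executable and it can be run on its own, or under the wait3 wrapper as in the transcript below:

chmod +x t.sh
./t.sh 10 /usr/local/src/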
martin@dodgecity:~/src/test$ uname -a
Linux dodgecity 2.6.32-24-generic #39-Ubuntu SMP Wed Jul 28 05:14:15 UTC 2010 x86_64 GNU/Linux
martin@dodgecity:~/src/test$ ./wait3 ./t.sh 10 /usr/local/src/
Time for 1 process ...
real 0m0.497s
user 0m0.440s
sys 0m0.050s
Time for 10 processes ...
real 0m2.453s
user 0m4.430s
sys 0m0.450s
Process accounting information for 4092:
user time = 4s, 870000us
system time = 0s, 500000us
maximum resident set size = 1496
integral shared memory size = 0
integral unshared data size = 0
integral unshared stack size = 0
page reclaims = 13592
page faults = 0
swaps = 0
block input operations = 0
block output operations = 0
messages sent = 0
messages received = 0
signals received = 0
voluntary context switches = 94
involuntary context switches = 879
If we run this in combination with the wait3 program from earlier, we can see that the rusage info returned by the Linux kernel (user + sys time) is likewise all in terms of CPU time. Note that the wait3 figures cover the whole script: 4.87s user is 0.440 + 4.430 and 0.50s system is 0.050 + 0.450, the sums of the two timed sections.
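We can also check the arithmetic directly (a quick sketch using bc and the figures from the 10-process run above):

# Average parallelism = CPU time / wall-clock time
echo "scale=2; (4.430 + 0.450) / 2.453" | bc    # prints 1.98

On average just under two CPUs were kept busy, which is consistent with the two-CPU budget discussed at the top.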