Anonymous ID: 2e41e8 Aug. 5, 2018, 2:55 p.m. No.2469655   🗄️.is 🔗kun   >>5013 >>7356

I made a neat script that could be useful for Linux users.

 

Often, when you write a script to process a lot of files, it is not so easy to fully use all your available CPU power (cores -> threads).

 

So after a few tries I came up with this script that can be used as a base. It is easy to modify to your needs and it uses as much CPU as you allow.

 

For example, if you have files to process (say, testing thousands of images for steganography), list the files, feed the list as standard input to the script, and let it spread the processing across all available cores / threads.

 

For example:

  • ls -1 *.png will list the file names one per line; pipe that to the script

  • or any other line-based input you want to process; the script will spread the items across all cores in whatever way you choose.
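Concretely, the input-generation step might look like this (hypothetical scratch directory and file names, just to show the one-item-per-line shape the script expects on standard input):

```shell
# create a few sample files in a scratch directory (hypothetical names)
mkdir -p /tmp/stego_demo
touch /tmp/stego_demo/a.png /tmp/stego_demo/b.png /tmp/stego_demo/c.png

# one file name per line — this is the stream you would pipe into the
# script, e.g.:  ls -1 /tmp/stego_demo/*.png | ./parallel.sh
ls -1 /tmp/stego_demo/*.png
```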

 

Here is the script:

 

#!/bin/bash

input="-" # "-" means read from standard input
[ "$input" = "-" ] && input=/dev/stdin

tot=0

# THIS gets the available processing units: if 4 cores = 8 threads, this gives 8
# depending on your jobs, you could try values like units -1, -2, +1, +2… or 2, 3, *4
units=$(nproc --all)

# note: no "cat | while" pipe here — a pipe would run the loop in a subshell,
# so $tot would be lost and the final "wait" could not see the jobs
while IFS= read -r line; do
    tot=$((tot + 1)) # just a counter to display the total processed

    ./clean1.sh "$line" "$line.clean" & # THIS line runs the parallel jobs. Change it to anything you want, but remember the & at the end

    jobs=$(jobs -rp | wc -l) # currently running jobs
    echo "$tot $line ($jobs)"
    #echo "Jobs $jobs"

    while [ "$jobs" -ge "$units" ]; do # if jobs are at the maximum, wait for a slot
        #echo -n '.'
        sleep 0.01
        jobs=$(jobs -rp | wc -l)
    done # when a slot frees up, continue reading input and add more jobs
done < "$input"

wait # wait for the last batch of jobs to finish

echo "Done $tot"
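You can check the throttling pattern end to end with a toy job in place of clean1.sh — a sketch (hypothetical: the "job" is just a short sleep that records its item, capped at 2 at a time):

```shell
#!/bin/bash
# Minimal self-contained demo of the same pattern: process 10 lines,
# never running more than 2 background jobs at once.
units=2
tot=0
out=/tmp/throttle_demo.out
: > "$out" # truncate the result file

printf 'item%d\n' {1..10} | {
    while IFS= read -r line; do
        tot=$((tot + 1))
        { sleep 0.05; echo "$line" >> "$out"; } & # toy job instead of clean1.sh

        while [ "$(jobs -rp | wc -l)" -ge "$units" ]; do # throttle, same as above
            sleep 0.01
        done
    done
    wait
    echo "Done $tot"
}

wc -l < "$out" # count of items actually processed
```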