How to Install and Use WebApp Information Gatherer(WIG) on Ubuntu 20.04 LTS


In this article, I will take you through the steps to install and use WebApp Information Gatherer (wig) on Ubuntu 20.04 LTS. wig is a web application information gathering tool that can identify numerous content management systems and other administrative applications. Application fingerprinting is based on checksums and string matching of known files across different CMS versions. A score is calculated for each detected CMS and its versions, and each detected CMS is displayed along with its most probable version(s). The score calculation is based on weights and the number of "hits" for a given checksum. More on GitHub.
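The checksum-and-weights idea can be sketched roughly as follows. This is a simplified illustration, not wig's actual code: the fingerprint entries and function name below are invented for the example.

```python
import hashlib

# Hypothetical fingerprint database: MD5 checksum of a known static file
# mapped to (CMS, version, weight). wig ships a much larger database;
# these two entries are made up purely for illustration.
FINGERPRINTS = {
    "5d41402abc4b2a76b9719d911017c592": ("ExampleCMS", "1.0", 1),
    "7d793037a0760186574b0282f2f435e7": ("ExampleCMS", "2.0", 2),
}

def score_responses(bodies):
    """Hash each fetched file body and tally weighted hits per (CMS, version)."""
    scores = {}
    for body in bodies:
        checksum = hashlib.md5(body).hexdigest()
        match = FINGERPRINTS.get(checksum)
        if match:
            cms, version, weight = match
            scores[(cms, version)] = scores.get((cms, version), 0) + weight
    return scores

# Two fetched files: one matches the version 1.0 fingerprint, one is unknown.
print(score_responses([b"hello", b"unknown content"]))
# → {('ExampleCMS', '1.0'): 1}
```

The version with the highest accumulated weight would then be reported as the most probable one.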

How It Works

The default behavior of WebApp Information Gatherer (wig) is to identify a CMS and exit after detecting its version. This is done to limit the amount of traffic sent to the target server. This behavior can be overridden with the '-a' flag, in which case wig will test all the known fingerprints. As some application setups do not keep files and resources in their default locations, it is possible to have wig fetch all the static resources it encounters during its scan; this is done with the '-c' option. The '-m' option tests all fingerprints against all fetched URLs, which is helpful if the default locations have been changed.
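The stop-after-first-detection behavior described above can be mimicked with a small loop. This is only an illustrative sketch of the control flow; the detector list and names below are invented and are not wig's internals.

```python
def run_scan(detectors, stop_after=1, test_all=False):
    """Run CMS detectors in order; stop once stop_after CMSs are found,
    unless test_all is set (the rough equivalent of wig's '-a' flag)."""
    found = []
    for name, detect in detectors:
        if detect():
            found.append(name)
            if not test_all and len(found) >= stop_after:
                break  # stop early to limit traffic, like wig's default
    return found

# Fake detectors standing in for real fingerprint checks.
detectors = [
    ("WordPress", lambda: True),
    ("Drupal", lambda: True),
    ("Joomla", lambda: False),
]

print(run_scan(detectors))                  # default: stop after the first hit
# → ['WordPress']
print(run_scan(detectors, test_all=True))   # '-a'-style: test everything
# → ['WordPress', 'Drupal']
```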




Step 1: Prerequisites

a) You should have a running Ubuntu 20.04 LTS Server.


b) You should have sudo or root access to run privileged commands.

c) You should have apt or apt-get utility available in your System.


d) You should have Python3 and git installed in your System.


Step 2: Update Your Server

If you have added any new repository information, it is necessary to run apt-get update once. This lets the package manager discover the newly added repository URLs so that it can download the required packages and their dependencies. Even if nothing has changed, running apt-get update refreshes the list of available package updates.

root@localhost:~# apt-get update
Hit:1 focal InRelease
Get:2 focal-updates InRelease [114 kB]
Get:3 focal-backports InRelease [101 kB]
Get:4 focal-security InRelease [114 kB]
Get:5 focal-updates/main amd64 Packages [1,344 kB]
Ign:6 focal/mongodb-org/5.0 InRelease
Hit:7 focal/mongodb-org/5.0 Release
Get:8 focal-updates/main i386 Packages [562 kB]
Get:9 focal-updates/main amd64 DEP-11 Metadata [279 kB]
Get:10 focal-updates/universe amd64 Packages [875 kB]
Get:12 focal-updates/universe i386 Packages [647 kB]
Get:13 focal-updates/universe amd64 DEP-11 Metadata [357 kB]


Step 3: Clone Git Repo

In the next step, clone the wig repository to a local location using the git clone command as shown below. This will create a local directory with the same name as the repository and download all of its contents. You can learn more about Git in 17 Popular Git Command Examples on Linux.

root@localhost:~# git clone https://github.com/jekyc/wig.git
Cloning into 'wig'...
remote: Enumerating objects: 4240, done.
remote: Total 4240 (delta 0), reused 0 (delta 0), pack-reused 4240
Receiving objects: 100% (4240/4240), 4.47 MiB | 3.24 MiB/s, done.
Resolving deltas: 100% (2832/2832), done.


Step 4: Install WebApp Information Gatherer(WIG)

Next, you need to go to the newly created wig directory using the cd wig command.

root@localhost:~# cd wig

Then use the python3 setup.py install command to install the module.

root@localhost:~/wig# python3 setup.py install
running install
running bdist_egg
running egg_info
creating wig.egg-info
writing wig.egg-info/PKG-INFO
writing dependency_links to wig.egg-info/dependency_links.txt
writing top-level names to wig.egg-info/top_level.txt
writing manifest file 'wig.egg-info/SOURCES.txt'
reading manifest file 'wig.egg-info/SOURCES.txt'
reading manifest template ''
writing manifest file 'wig.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib
creating build/lib/wig
copying wig/ -> build/lib/wig


Step 5: Use WebApp Information Gatherer(WIG)

Once the wig module is installed, you can either import it in your own Python scripts or run the tool directly. Here we run the bundled wig.py script with the python3 command to scan a URL, as shown below. The script is available in the cloned GitHub repository.

root@localhost:~/wig# python3 wig.py google.com

wig - WebApp Information Gatherer

Redirected to
Continue? [Y|n]:y
_______________________________ SITE INFO _______________________________
IP                               Title                    Google

________________________________ VERSION ________________________________
Name                            Versions                    Type
gws                                                         Platform
sffe                                                        Platform

_______________________________ SUBDOMAINS ______________________________
Name                           Page Title                   IP
                               Our Products - Google
                               Our Products - Google
                               The Keyword | Google
                               Gmail

______________________________ INTERESTING ______________________________
URL                            Note                         Type
/robots.txt                    robots.txt index             Interesting

Time: 30.0 sec Urls: 601 Fingerprints: 40401

You can find all the arguments and options available with wig using the python3 wig.py --help command.

root@localhost:~/wig# python3 wig.py --help
usage: wig.py [-h] [-l INPUT_FILE] [-q] [-n STOP_AFTER] [-a] [-m] [-u] [-d] [-t THREADS] [--no_cache_load] [--no_cache_save] [--cache_dir CACHE_DIR] [-N]
[--verbosity] [--proxy PROXY] [-w OUTPUT_FILE]

WebApp Information Gatherer

positional arguments:
url The url to scan e.g.

optional arguments:
-h, --help show this help message and exit
-l INPUT_FILE File with urls, one per line.
-q Set wig to not prompt for user input during run
-n STOP_AFTER Stop after this amount of CMSs have been detected. Default: 1
-a Do not stop after the first CMS is detected
-m Try harder to find a match without making more requests
-u User-agent to use in the requests
-d Disable the search for subdomains
-t THREADS Number of threads to use
--no_cache_load Do not load cached responses
--no_cache_save Do not save the cache for later use
--cache_dir CACHE_DIR
Set location for cache. Default: ~/.wig_cache - if not possible, CWD is used.
-N Shortcut for --no_cache_load and --no_cache_save
--verbosity, -v Increase verbosity. Use multiple times for more info
--proxy PROXY Tunnel through a proxy (format: localhost:8080)
-w OUTPUT_FILE File to dump results into (JSON)
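Since the '-w' option dumps scan results as JSON, they are easy to post-process in Python. The exact structure of the dump file varies between wig versions, so the keys below are a made-up stand-in; only the load-and-iterate pattern is the point, and a synthetic file is written first so the sketch runs without a real scan.

```python
import json
import tempfile

# Synthetic stand-in for a wig '-w' dump; the keys here are assumptions,
# not wig's documented output format.
sample = [{"site": "http://example.com", "data": [
    {"category": "cms", "plugin": "ExampleCMS", "version": "2.0"},
]}]

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    json.dump(sample, fh)
    path = fh.name

# Load the dump and pull out detected CMS names and versions.
with open(path) as fh:
    results = json.load(fh)

for site in results:
    for entry in site["data"]:
        if entry["category"] == "cms":
            print(entry["plugin"], entry["version"])
# → ExampleCMS 2.0
```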
