Cyberithub

How to Install and Use WebApp Information Gatherer(WIG) on Ubuntu 20.04 LTS


In this article, I will take you through the steps to install and use WebApp Information Gatherer (wig) on Ubuntu 20.04 LTS. wig is a web application information gathering tool that can identify numerous Content Management Systems (CMSes) and other administrative applications. Application fingerprinting is based on checksums and string matching of known files for different versions of CMSes. A score is calculated for each detected CMS and its versions, based on weights and the number of "hits" for a given checksum, and each detected CMS is displayed along with its most probable version(s). More on GitHub.

How It Works

The default behavior of WebApp Information Gatherer (wig) is to identify a CMS and exit after detecting its version. This is done to limit the amount of traffic sent to the target server. This behavior can be overridden with the '-a' flag, in which case wig will test all the known fingerprints. As some configurations of applications do not use the default locations for files and resources, it is possible to have wig fetch all the static resources it encounters during its scan, using the '-c' option. The '-m' option tests all fingerprints against all fetched URLs, which is helpful if the default locations have been changed.
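The scan modes described above can be sketched as plain command lines. This is a minimal illustration, not wig's own code: the target URL is a placeholder, and the flags are the ones described in this article and in wig's help output.

```python
# Sketch of the scan modes described above. The target is a placeholder;
# replace it with a host you are allowed to scan.
target = "http://example.com"
base = ["python3", "wig.py"]

default_scan = base + [target]        # default: stop after the first CMS version is found
full_scan    = base + ["-a", target]  # test all known fingerprints
fetch_scan   = base + ["-c", target]  # also fetch static resources found during the scan
match_scan   = base + ["-m", target]  # match all fingerprints against all fetched URLs

for cmd in (default_scan, full_scan, fetch_scan, match_scan):
    print(" ".join(cmd))

# To actually run one of these (network access to the target required):
# import subprocess; subprocess.run(full_scan, check=True)
```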



How to Install and Use WebApp Information Gatherer on Ubuntu 20.04 LTS

Also Read: How to Install Zikula CMS on Ubuntu 20.04 LTS [Step by Step]

Step 1: Prerequisites

a) You should have a running Ubuntu 20.04 LTS Server.


b) You should have sudo or root access to run privileged commands.

c) You should have apt or apt-get utility available in your System.


d) You should have Python3 and git installed in your System.
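You can quickly verify prerequisite (d) before proceeding. The short sketch below only reports what it finds; the exact versions will differ per system.

```python
# Check that Python 3 is running this script and whether git is on the PATH.
import shutil
import sys

print("Python version:", sys.version.split()[0])
print("git found:", shutil.which("git") is not None)
# If git is missing, install it with: apt-get install -y git
```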

 

Step 2: Update Your Server

If you have added any repository information, it is necessary to run apt-get update once. This enables the package manager to identify all the new URLs made available after the repository addition, so that it can download the required packages and their dependencies. Even if nothing has changed, apt-get update will refresh the list of available package versions.

root@localhost:~# apt-get update
Hit:1 http://in.archive.ubuntu.com/ubuntu focal InRelease
Get:2 http://in.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:3 http://in.archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Get:4 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:5 http://in.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [1,344 kB]
Ign:6 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 InRelease
Hit:7 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 Release
Get:8 http://in.archive.ubuntu.com/ubuntu focal-updates/main i386 Packages [562 kB]
Get:9 http://in.archive.ubuntu.com/ubuntu focal-updates/main amd64 DEP-11 Metadata [279 kB]
Get:10 http://in.archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [875 kB]
Get:12 http://in.archive.ubuntu.com/ubuntu focal-updates/universe i386 Packages [647 kB]
Get:13 http://in.archive.ubuntu.com/ubuntu focal-updates/universe amd64 DEP-11 Metadata [357 kB]

 

Step 3: Clone Git Repo

In the next step, you can clone the wig repository to a local location using the git clone https://github.com/jekyc/wig.git command as shown below. This will create a local directory with the same name as the repository and save all its contents there. You can read more about Git in 17 Popular Git Command Examples on Linux.

root@localhost:~# git clone https://github.com/jekyc/wig.git
Cloning into 'wig'...
remote: Enumerating objects: 4240, done.
remote: Total 4240 (delta 0), reused 0 (delta 0), pack-reused 4240
Receiving objects: 100% (4240/4240), 4.47 MiB | 3.24 MiB/s, done.
Resolving deltas: 100% (2832/2832), done.

 

Step 4: Install WebApp Information Gatherer(WIG)

Next, you need to go to the newly created wig directory using the cd wig command.

root@localhost:~# cd wig

and then use the python3 setup.py install command to install the module.

root@localhost:~/wig# python3 setup.py install
running install
running bdist_egg
running egg_info
creating wig.egg-info
writing wig.egg-info/PKG-INFO
writing dependency_links to wig.egg-info/dependency_links.txt
writing top-level names to wig.egg-info/top_level.txt
writing manifest file 'wig.egg-info/SOURCES.txt'
reading manifest file 'wig.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'wig.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib
creating build/lib/wig
copying wig/__init__.py -> build/lib/wig
........................................
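Once the install finishes, a quick sanity check can confirm that the wig package landed on the Python path. This is a minimal sketch; it only reports whether the package is importable, and on a machine where this step has not been run it simply prints False.

```python
# Check whether the 'wig' package installed by setup.py is importable,
# without actually importing (and running) any of its code.
import importlib.util

spec = importlib.util.find_spec("wig")
print("wig importable:", spec is not None)
```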

 

Step 5: Use WebApp Information Gatherer(WIG)

Once wig is installed, you can use the wig.py script that comes with the cloned repository to scan a target. Here we scan the URL google.com using the python3 wig.py google.com command as shown below.

root@localhost:~/wig# python3 wig.py google.com

wig - WebApp Information Gatherer


Redirected to http://www.google.com
Continue? [Y|n]:y
Scanning http://www.google.com...
_______________________________ SITE INFO _______________________________
IP                               Title
216.58.196.68                    Google

________________________________ VERSION ________________________________
Name                            Versions                    Type
gws                                                         Platform
sffe                                                        Platform

_______________________________ SUBDOMAINS ______________________________
Name                           Page Title                   IP
https://m.google.com:443       Our Products - Google        172.217.174.235
http://m.google.com:80         Our Products - Google        172.217.174.235
https://mobile.google.com:443  Our Products - Google        172.217.174.235
http://mobile.google.com:80    Our Products - Google        172.217.174.235
https://blog.google.com:443    The Keyword | Google         172.217.167.169
http://blog.google.com:80      The Keyword | Google         172.217.167.169
https://mail.google.com:443    Gmail                        142.250.183.197
http://mail.google.com:80      Gmail                        142.250.183.197

______________________________ INTERESTING ______________________________
URL                            Note                         Type
/robots.txt                    robots.txt index             Interesting

_________________________________________________________________________
Time: 30.0 sec Urls: 601 Fingerprints: 40401

You can find all the arguments and options available with wig using the python3 wig.py --help command.

root@localhost:~/wig# python3 wig.py --help
usage: wig.py [-h] [-l INPUT_FILE] [-q] [-n STOP_AFTER] [-a] [-m] [-u] [-d]
              [-t THREADS] [--no_cache_load] [--no_cache_save]
              [--cache_dir CACHE_DIR] [-N] [--verbosity] [--proxy PROXY]
              [-w OUTPUT_FILE]
              [url]

WebApp Information Gatherer

positional arguments:
  url                   The url to scan e.g. http://example.com

optional arguments:
  -h, --help            show this help message and exit
  -l INPUT_FILE         File with urls, one per line.
  -q                    Set wig to not prompt for user input during run
  -n STOP_AFTER         Stop after this amount of CMSs have been detected. Default: 1
  -a                    Do not stop after the first CMS is detected
  -m                    Try harder to find a match without making more requests
  -u                    User-agent to use in the requests
  -d                    Disable the search for subdomains
  -t THREADS            Number of threads to use
  --no_cache_load       Do not load cached responses
  --no_cache_save       Do not save the cache for later use
  --cache_dir CACHE_DIR
                        Set location for cache. Default: ~/.wig_cache - if not possible, CWD is used.
  -N                    Shortcut for --no_cache_load and --no_cache_save
  --verbosity, -v       Increase verbosity. Use multiple times for more info
  --proxy PROXY         Tunnel through a proxy (format: localhost:8080)
  -w OUTPUT_FILE        File to dump results into (JSON)
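Several of these options combine naturally for unattended scans. The sketch below assembles one such command line from the flags in the help output; the target URL and output filename are placeholders.

```python
# Combining options from the help output into one non-interactive scan.
cmd = [
    "python3", "wig.py",
    "-q",                   # do not prompt for user input during the run
    "-n", "3",              # stop after 3 CMSs have been detected
    "-t", "20",             # use 20 threads
    "-w", "results.json",   # dump results into a JSON file (placeholder name)
    "http://example.com",   # placeholder target
]
print(" ".join(cmd))

# To actually run it (network access to the target required):
# import subprocess; subprocess.run(cmd, check=True)
```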
