Categories
newtools python

Loading unique IPs in MongoDB

Hi,

So today I played a little bit with the idea of storing the unique IP addresses in a separate MongoDB collection.

Since I will be using a subscription from ip-api.com, there is an option to query info in batches, with a limit of 100 IPs per payload.

So, at first glance there are 227200 unique IPs in my dataset, which accounts for 2272 payloads to be queried.

The code looks more or less like this:

# Split the unique source IPs into batches of 100 (the ip-api.com batch limit)
unique_ip = temp['SourceIP'].unique()
unique_list = [unique_ip[i:i + 100] for i in range(0, len(unique_ip), 100)]

# Build one document per batch: a sequential id plus the batch as payload
data = []
for i, batch in enumerate(unique_list, start=1):
    data.append({'id': i, 'payload': batch.tolist()})
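Each resulting document then has the shape {'id': 1, 'payload': ['203.0.113.7', '198.51.100.23', ...]}, with at most 100 IPs per payload (the IPs here are made-up examples).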

Once this is constructed, you only need to iterate over the list and insert each document into MongoDB using this code:

import pymongo

# Connect to the local MongoDB instance and select a database and collection
myclient = pymongo.MongoClient("mongodb://localhost:27017/")
mydb = myclient["mydatabase"]
mycol = mydb["unique_ip"]

# Insert the batch documents one by one (insert_many(data) would also do it in a single call)
for doc in data:
    mycol.insert_one(doc)

The next step will involve reading the documents from the collection one by one and serving each payload to the API endpoint.
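A rough sketch of how that could look, assuming ip-api.com's batch endpoint (a POST with a JSON array of up to 100 IPs) and the requests library:

import requests

# Sketch only: send each stored payload to the ip-api.com batch endpoint
for doc in mycol.find():
    resp = requests.post("http://ip-api.com/batch", json=doc['payload'])
    resp.raise_for_status()
    results = resp.json()  # one result object per IP in the batch
    # ...store or process the enriched records here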

Tnx,

Sorin

Categories
python

Start of the traffic project

So, I managed to gather about 1 GB of records from the pfSense installation and grab them from the box (the filter.log files that you can find under /var/log).

And I have a list of 16 logs that I need to concatenate.

I had a lot of trouble concatenating them, since I tried multiple times to use the writelines() method of the file object.

The code that worked for me:

# Rewrite each syslog line as CSV: keep the three date/time fields plus the
# comma-separated payload, and drop the hostname and process name in between
with open('/Users/tudorsorin/Downloads/var/log/firewallrepo/filter.concat', 'r') as f, \
     open('/Users/tudorsorin/Downloads/var/log/firewallrepo/filter.csv', 'w') as outputcsv:
    for line in f:
        fields = line.split(" ")
        outputcsv.write(",".join(fields[0:3]) + "," + fields[-1])

The idea is that the data is already in CSV format; all you need to do is modify the “header” that normally looks like Feb 20 07:58:18 soaretudorhome filterlog[41546]: into something like Feb,20,07:58:18, and the rest remains the same.

Surprisingly, if you load it directly into a dataframe using pd.read_csv without forcing a header, it works, and all the data is there, with NaN in the fields that are not filled.
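For reference, the load is a one-liner (a sketch using the path from above; with the default header inference, the first line of the file becomes the column labels):

import pandas as pd

# The first row becomes the header; shorter rows are padded with NaN
df = pd.read_csv('/Users/tudorsorin/Downloads/var/log/firewallrepo/filter.csv')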

After this is done, we can filter only the traffic that goes over pppoe0, which is the WAN interface, and you can easily do that using temp = df[df['pppoe0'] == 'pppoe0']

So far so good. I also took a look at a generic pppoe0 line and came to the conclusion that the columns that interest me are [0, 1, 2, 21, 22, 23, 24], which represent the date, the source and destination IPs, and the ports (source and destination). You can then filter the dataframe with temp = temp.iloc[:, [0, 1, 2, 21, 22, 23, 24]]

So finally we have a dataframe that we can work with. Now what remains is to change the table header and try to enhance it with extra info.
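The header change itself could be as simple as the snippet below (a sketch; the seven labels are placeholders I chose, not names from the original setup):

# Hypothetical labels for the seven selected columns
temp.columns = ['Month', 'Day', 'Time', 'SourceIP', 'DestIP', 'SourcePort', 'DestPort']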

Cheers

Sorin

Categories
python

Unique values on columns – pandas

Hi,

Today's post is a short example covering columns whose names contain multiple words separated by spaces.

For example, I have a dataframe that has, among others, a column named Page ID.

I have read in some sources that you can use the construction wine_new.column_name.unique() to get the unique values.

If the column name is a single word, it will work, but if the name contains multiple words, you cannot use a construct like wine_new.'Page ID'.unique() because it will give a syntax error.

Good, so you try to rename it. Why Page ID and not pageid? OK, that should be easy:

wine_new = wine_new.rename(columns={"Page ID": "pageid"}, errors="raise")

And it now looks “better”.

But if you need to keep the column name, you can just as easily use wine_new['Page ID'].unique(). (If you want to count the number of unique values, you can also use wine_new['Page ID'].nunique().)
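Both approaches side by side (a minimal sketch with a made-up wine_new dataframe):

import pandas as pd

# Made-up dataframe with a space in one column name
wine_new = pd.DataFrame({"Page ID": [1, 2, 2, 3], "score": [90, 85, 85, 88]})

# Bracket access works regardless of spaces in the name
print(wine_new["Page ID"].unique())   # [1 2 3]
print(wine_new["Page ID"].nunique())  # 3

# Attribute access only works after renaming to a single word
wine_new = wine_new.rename(columns={"Page ID": "pageid"}, errors="raise")
print(wine_new.pageid.unique())       # [1 2 3]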

There are multiple resources on this topic, but most of them do not explain both approaches.

Cheers