In this version of LogShark, we are bringing back the ability to view historic trends in your logs. You can either write LogShark’s output to a PostgreSQL database server or append data from a new log set to an existing LogShark output.
To write LogShark’s output to a PostgreSQL database, you need to provide the connection string and ensure the chosen user has the necessary permissions. LogShark handles creating the database, tables, and columns, and inserts all the data extracted from your Tableau logs.
You will need a user with the following permissions:

- CREATE
- SELECT
- ALTER
- INSERT
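
For reference, here is a minimal sketch of how a database administrator might create such a user with `psql`. The role name and password are placeholders, and granting `CREATEDB` is one way to let LogShark create its own database; adjust to your site's security policy:

```
psql -U postgres -c "CREATE ROLE myUserName LOGIN PASSWORD 'myPassword' CREATEDB;"
```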
To direct LogShark to write to a PostgreSQL database, you need to update its settings. You can do this either in the config file or on the command line.
To update the config settings, open the config file at `<LogShark_install_location>\Config\LogSharkConfig.json` and update the following fields inside the `PostgresWriterDatabase` group. Note that the individual fields take precedence over values supplied in other fields. For example, a value supplied in the `DatabaseName` field will override the database name supplied in the `ConnectionString` field.
"PostgresWriterDatabase": {
"Host": "localhost",
"DatabaseName": "myDataBase",
"Username": "myUserName",
"Password": "myPassword",
"ConnectionString": "",
"ServiceDatabaseName": "",
"BatchSize": 100,
"ConnectionTimeoutSeconds": 30
We recommend using the `ConnectionString` field, as LogShark will use the supplied value verbatim. However, if you want to supply the values piecemeal, feel free to use the individual fields above. An example connection string:
"ConnectionString": "User ID=myUserName;Password=myPassword;Host=localhost;Port=5432;Database=myDataBase;Pooling=true;Min Pool Size=0;Max Pool Size=100;Connection Lifetime=0;",
If you don’t want to store the username and password in the config file, you can specify them on the command line. See the full list of available command-line parameters below.
```
LogShark <LogSetLocation> <RunId> --writer postgres --pg-db-user "myUserName" --pg-db-pass "myPassword"
```
If a `ConnectionString`, `Username`, or `Password` is not provided, LogShark assumes you want to use Integrated Security.
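
For example, to rely on Integrated Security you might omit the credential arguments entirely and supply only the host and database name (the values here are placeholders):

```
LogShark <LogSetLocation> <RunId> --writer postgres --pg-db-host "localhost" --pg-db-name "myDataBase"
```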
Here is a syntax reference for invoking LogShark with the PostgreSQL writer.
```
LogShark <LogSetLocation> <RunId> --writer postgres
```
Each of the fields for the configuration may be supplied as a command line argument. This can be beneficial if you wish to avoid storing user credentials as plain text inside LogSharkConfig.json. You may mix and match supplying connection information between the config file and command line arguments (for example, supply a connection string with a placeholder password inside the config file, and supply the actual password as a command line argument). For each field, any value supplied as a command line argument supersedes the value supplied in the config file.
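
As a sketch of that mix-and-match approach, you could keep a connection string with the password omitted (or set to a placeholder) in `LogSharkConfig.json` and supply the real password at run time:

```
LogShark <LogSetLocation> <RunId> --writer postgres --pg-db-pass "myActualPassword"
```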
| Command | Description |
|---|---|
| -w, --writer | Select type of output writer to use (e.g. “csv”, “postgres”, “sql”, etc.) |
| --pg-db-conn-string | Connection string for output database for postgres writer |
| --pg-db-host | Output database hostname for postgres writer |
| --pg-db-name | Output database name for postgres writer |
| --pg-db-user | Output database username for postgres writer |
| --pg-db-pass | Output database password for postgres writer |
The data from the run is saved in the PostgreSQL database specified in the config file or command-line parameters.
All workbooks are saved in a `<LogShark_run_location>\Output\<RunID>\workbooks` folder in the directory from which LogShark is run. If the folder doesn’t exist, LogShark creates it. The workbooks in that folder are connected to the Postgres database you specified when you ran LogShark. When you open a workbook, you will be asked to provide your Postgres credentials.
LogShark is an active project, so different versions may have different output schemas. Despite this, LogShark never removes data, columns, tables, schemas, or databases. LogShark only ever creates the schema necessary to store data for its current execution. This means one version may create a table and/or column that is unused in subsequent versions. These extra tables/columns do not impact LogShark’s ability to extract data; LogShark simply ignores unused schema.
Another way to see historic trends in the same viz is to use the append command to append data from a new log set to an existing LogShark output. The section below describes how to do it.
Here is a syntax reference for invoking the append command in LogShark.
```
LogShark <LogSetLocation> <RunId> --append-to <RunId_Of_The_Run_To_Append_To>
```
The way LogShark normally works is that it creates a new output folder and new, empty extract files at the beginning of the run and then pushes data into those empty extracts. When the `--append-to` parameter is specified, LogShark instead appends the data from the new log set to the output of the run you specify.
LogShark appends one run at a time. It is possible to combine more than two runs; however, you would need to “chain” the runs in the right order, i.e. first process log set A, then append log set B to the results of A, then append log set C to the results of B, etc. Here are the steps (a worked sketch follows the list):

1. Process the first log set as usual, without the `--append-to` parameter.
2. Process the second log set with the `--append-to <run_id_of_the_first_run>` parameter.
3. Process the third log set with the `--append-to <run_id_of_the_second_run>` parameter, and so on.
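
For example, a hypothetical chain of three runs (the log set names and run IDs here are placeholders):

```
LogShark logsetA.zip runA
LogShark logsetB.zip runB --append-to runA
LogShark logsetC.zip runC --append-to runB
```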