Please emerge qrencode and google-authenticator-libpam-hardened. At the time of writing, you have to install the masked (live, -9999) version.
Ensure that your terminal is at least 90x56 so that the QR code is displayed correctly.
ACCEPT_KEYWORDS="**" emerge -av google-authenticator-libpam-hardened
These are the packages that would be merged, in order:
Calculating dependencies… done!
[ebuild N ] sys-auth/oath-toolkit-2.6.2-r2::gentoo USE="pam -pskc -static-libs -test" 4,196 KiB
[ebuild N ] sys-auth/google-authenticator-libpam-hardened-9999::gentoo USE="qrcode" 0 KiB
Execute google-authenticator as the user
$ google-authenticator
Do you want authentication tokens to be time-based (y/n) y
Your new secret key is: jeronimo!
Your potential verification codes are 012345 012345 012345 012345 012345
Your emergency scratch codes are:
88888888
88888888
88888888
88888888
88888888
Do you want me to update your "/home/<user>/.google_authenticator" file? (y/n) y
Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y
By default, a new token is generated every 30 seconds by the mobile app.
In order to compensate for possible time-skew between the client and the server,
we allow an extra token before and after the current time. This allows for a
time skew of up to 30 seconds between authentication server and client. If you
experience problems with poor time synchronization, you can increase the window
from its default size of 3 permitted codes (one previous code, the current
code, the next code) to 17 permitted codes (the 8 previous codes, the current
code, and the 8 next codes). This will permit a time skew of up to 4 minutes
between client and server.
Do you want to do so? (y/n) y
If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting? (y/n) y
Open /etc/pam.d/system-login with your editor and add these 2 lines at the bottom:
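The exact lines aren't reproduced here; as a rough sketch (an assumption on my part, check the google-authenticator-libpam README for your version), something like:
# 2FA via google-authenticator; nullok lets users without a ~/.google_authenticator file log in and can be dropped once everyone is enrolled
auth required pam_google_authenticator.so nullok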
Open /etc/ssh/sshd_config in your editor and ensure the following settings are present.
PubkeyAuthentication no
PasswordAuthentication yes
UsePAM yes
ChallengeResponseAuthentication yes
Note: If public-key authentication is enabled, it bypasses PAM and will therefore skip 2FA completely.
GDM
Editing /etc/pam.d/system-login as above should be enough; just restart the gdm service :) You'll log in with your normal password and then be prompted for your 2FA code.
QR Codes
You can generate your own QR codes on the CLI and output them to the screen or to an image file.
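For example, with the qrencode package installed earlier you can render a TOTP provisioning URI either to the terminal or to a PNG; the otpauth URI below is only a made-up placeholder, substitute your own secret:
$ qrencode -t ANSIUTF8 'otpauth://totp/user@myhost?secret=ABCDEFGHIJKLMNOP&issuer=myhost'
$ qrencode -o totp.png 'otpauth://totp/user@myhost?secret=ABCDEFGHIJKLMNOP&issuer=myhost'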
SPF (Sender Policy Framework)
Prerequisites:
* DNSSEC [usually managed by your domain provider, or by you if you run bind]
* PTR record [usually set up by your ISP unless you run an authoritative DNS. Implies a static IP]
* Exim >4.7
You can put the following into its own ACL, e.g. acl_check_spf, or place it in the global ACL (acl_check_data:).
deny condition = ${if eq{$sender_helo_name}{} {1}}
message = Nice bots say HELO first
# reject messages from senders listed in these DNSBLs
deny dnslists = zen.spamhaus.org
# SPF validation
deny spf = fail : softfail
message = SPF validation failed: \
$sender_host_address is not allowed to send mail from \
${if def:sender_address_domain \
{$sender_address_domain}{$sender_helo_name}}
log_message = SPF validation failed\
${if eq{$spf_result}{softfail} { (softfail)}{}}: \
$sender_host_address is not allowed to send mail from \
${if def:sender_address_domain \
{$sender_address_domain}{$sender_helo_name}}
deny spf = permerror
message = SPF validation failed: \
syntax error in SPF record(s) for \
${if def:sender_address_domain \
{$sender_address_domain}{$sender_helo_name}}
log_message = SPF validation failed (permerror): \
syntax error in SPF record(s) for \
${if def:sender_address_domain \
{$sender_address_domain}{$sender_helo_name}}
defer spf = temperror
message = temporary error during SPF validation; \
please try again later
log_message = SPF validation failed temporary; deferred
# Log SPF none/neutral result
warn spf = none : neutral
log_message = SPF validation none/neutral
# Use the lack of reverse DNS to trigger greylisting. Some people
# even reject for it but that would be a little excessive.
warn condition = ${if eq{$sender_host_name}{} {1}}
set acl_m_greylistreasons = Host $sender_host_address \
lacks reverse DNS\n$acl_m_greylistreasons
accept
# Add an SPF-Received header to the message
add_header = :at_start: $spf_received
logwrite = SPF validation passed
You will also need a TXT record published with the registrar and/or in your internal DNS.
Host name      Type   TTL      Data
example.com    TXT    1 hour   "v=spf1 ip4:xxx.xxx.xxx.xxx ip6::1 -all"
Looking at the record itself, we see that the version indicator, 'v=spf1', is followed by a typical SPF policy: first a list of systems that are authorised to send mail for the domain, then '-all', which means that all other systems are not authorised. The alternative to ending the record with '-all' is to end with '~all'. That is known as a 'soft fail', meaning that messages from non-validating systems should not be blocked, but accepted and tagged.
DKIM (DomainKeys Identified Mail)
Before the ACL Configuration, place the following:
# DKIM macros
# get the sender domain from the outgoing mail
SENDER_DOMAIN = ${if def:h_from:{${lc:${domain:${address:$h_from:}}}}{$qualify_domain}}
# the key file name will be based on the domain name in the From header
DKIM_KEY_PATH = /etc/exim/keys
DKIM_KEY_FILE = dkim_rsa.private
Put the following under the ACL Configuration.
# This access control list is used to process DKIM status.
acl_check_dkim:
# Skip DKIM checks for all authenticated connections (probably MUAs)
accept
authenticated = *
# Record the current timestamp, in order to delay crappy senders
warn
set acl_m0 = $tod_epoch
# Warn no DKIM
warn
dkim_status = none
set acl_c4 = X-DKIM-Warning: No signature found
# RFC 8301 requires 'permanently failed evaluation' for DKIM signatures signed with 'historic algorithms (currently, rsa-sha1)'
# @SEE: https://www.exim.org/exim-html-current/doc/html/spec_html/ch-dkim_and_spf.html
warn
condition = ${if !def:acl_c4 {true}{false} }
condition = ${if eq {$dkim_verify_status}{pass} }
condition = ${if eq {${length_3:$dkim_algo}}{rsa} }
condition = ${if or { {eq {$dkim_algo}{rsa-sha1} } \
{< {$dkim_key_length}{1024} } } }
set acl_c4 = X-DKIM-Warning: forced DKIM failure (weak hash or short key)
set dkim_verify_status = fail
set dkim_verify_reason = hash too weak or key too short
# RFC6376 requires that verification fail if the From: header is not included in the signature
# @SEE: https://www.exim.org/exim-html-current/doc/html/spec_html/ch-dkim_and_spf.html
warn
condition = ${if !def:acl_c4 {true}{false} }
condition = ${if !inlisti{from}{$dkim_headernames}{true}{false} }
set acl_c4 = X-DKIM-Warning: From: header not included in the \
signature, this defies the purpose of DKIM
# Warn invalid or failed signatures
warn
condition = ${if !def:acl_c4 {true}{false} }
dkim_status = fail:invalid
set acl_c4 = X-DKIM-Warning: verifying signature of $dkim_cur_signer \
failed for $sender_address because $dkim_verify_reason
# Add a Received-DKIM: header to the message (regardless of DKIM status)
warn
add_header = Received-DKIM: $dkim_verify_status ${if \
def:dkim_cur_signer {($dkim_cur_signer with \
$dkim_algo for $dkim_headernames)} }
# Set up for finalisation: add header and write to log
warn
condition = ${if def:acl_c4 {true}{false} }
add_header = $acl_c4
logwrite = $acl_c4
accept
Again, a TXT record needs to be defined.
Host name                           Type   TTL      Data
<selector>._domainkey.example.com   TXT    1 hour   "v=DKIM1; k=rsa; p=<base64-encoded public key>"
To enable DKIM-validating mail servers to validate our digital signatures, the public key from the DKIM key pair generated earlier has to be published in the zone file of the signing domain. The first step is to generate the public key from the DKIM key file:
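A likely way to do this (an assumption on my part, matching the key file created in /etc/exim/keys below) is:
# derive the public key from the private key; for the p= value of the DNS record,
# strip the BEGIN/END lines and join the base64 into a single string
openssl rsa -in /etc/exim/keys/dkim_rsa.private -pubout -out /etc/exim/keys/dkim_rsa.public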
Note the 'dkim20220615': that is the 'selector', which specifies the key pair used for signing. As you'll see shortly, the selector is also included in the 'DKIM Signature' header, so that when the receiving mail server follows the validation procedure, it knows exactly which public key to request from the DNS.
DMARC (Domain-based Message Authentication, Reporting and Conformance)
acl_check_data:
# DMARC
warn dmarc_status = quarantine
!authenticated = *
log_message = Message from $dmarc_used_domain failed sender's DMARC policy; quarantine
#control = dmarc_enable_forensic
set acl_m_quarantine = 1
# use this variable in a router/transport
deny dmarc_status = reject
!authenticated = *
message = Message from $dmarc_used_domain failed sender's DMARC policy; reject
#control = dmarc_enable_forensic
warn add_header = :at_start: ${authresults {$primary_hostname}}
You'll also need to generate the DKIM key pair, as follows:
mkdir /etc/exim/keys/
cd /etc/exim/keys/
openssl genrsa -out dkim_rsa.private 2048
The new file 'dkim_rsa.private' contains the private key, which has to be kept secret. It's therefore important to ensure that the key file access rights provide appropriate security:
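For example (the owner/group depends on the user Exim runs as on your system, so 'mail' here is an assumption):
chown root:mail /etc/exim/keys/dkim_rsa.private
chmod 640 /etc/exim/keys/dkim_rsa.private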
Although generating a longer key (4096 bits, rather than 2048 bits) is an option, DKIM signatures remain valid for relatively short periods. They are, after all, used exclusively for delivering messages, which, even in the worst-case scenario, only takes a few days. Restricting the key length to 2048 bits allows DNS traffic to go via the efficient UDP protocol, whereas it would be necessary to switch to the more onerous TCP protocol if longer keys were used.
As usual, you will need to submit a DMARC record to DNS:
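A typical record looks something like this (the policy and report address are placeholders to adapt):
Host name            Type   TTL      Data
_dmarc.example.com   TXT    1 hour   "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"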
If you move to newer generation NVMe-based flash storage, smartctl won't work anymore. It looks like support for NVMe in Smartmontools is coming, and it would be great to get a single tool that supports both SATA and NVMe flash storage.
In the meantime, you can use the nvme tool available from the nvme-cli package. It provides some basic information for NVMe devices.
To get information about the NVMe devices installed:
# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     S4EWNG0M116212J      Samsung SSD 970 EVO Plus 1TB             1         713.05 GB / 1.00 TB        512 B + 0 B      1B2QEXM7
To get SMART information:
# nvme smart-log /dev/nvme0
Smart Log for NVME device:nvme0 namespace-id:ffffffff
critical_warning                    : 0
temperature                         : 37 C
available_spare                     : 100%
available_spare_threshold           : 10%
percentage_used                     : 0%
data_units_read                     : 13,820,831
data_units_written                  : 20,647,263
host_read_commands                  : 197,831,770
host_write_commands                 : 499,344,371
controller_busy_time                : 1,663
power_cycles                        : 36
power_on_hours                      : 8,091
unsafe_shutdowns                    : 18
media_errors                        : 0
num_err_log_entries                 : 0
Warning Temperature Time            : 0
Critical Composite Temperature Time : 0
Temperature Sensor 1                : 37 C
Temperature Sensor 2                : 34 C
Thermal Management T1 Trans Count   : 0
Thermal Management T2 Trans Count   : 0
Thermal Management T1 Total Time    : 0
Thermal Management T2 Total Time    : 0
Available Spare. Contains a normalized percentage (0 to 100%) of the remaining spare capacity that is available.
Available Spare Threshold. When the Available Spare capacity falls below the threshold indicated in this field, an asynchronous event completion can occur. The value is indicated as a normalized percentage (0 to 100%).
Percentage Used. Contains a vendor specific estimate of the percentage of the NVM subsystem life used, based on actual usage and the manufacturer’s prediction of NVM life.
(Note: the number can be more than 100% if you’re using storage for longer than its planned life.)
Data Units Read/Data Units Written. This is the number of 512-byte data units that are read/written, but it is measured in an unusual way: each unit corresponds to 1,000 of the 512-byte blocks, so you can multiply this value by 512,000 to get the value in bytes. It does not include metadata accesses.
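For the device above, that works out to roughly 13,820,831 x 512,000 ≈ 7.08 TB read and 20,647,263 x 512,000 ≈ 10.57 TB written.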
Host Read/Write Commands. The number of commands of each type that were issued. Together with the data units read/written above, you can compute the average IO size for "physical" reads and writes.
Controller Busy Time. Time in minutes that the controller was busy servicing commands. This can be used to gauge long-term storage load trends.
Unsafe Shutdowns. The number of times a power loss happened without a shutdown notification being sent. Depending on the NVMe device you’re using, an unsafe shutdown might corrupt user data.
Warning Temperature Time/Critical Temperature Time. The time in minutes a device operated above the warning or critical temperature. Both should be zero.
The remaining fields come from the vendor-specific SMART log (for example, nvme intel smart-log-add on Intel drives) rather than the standard smart-log output shown above.
Wear_Leveling. This shows how much of the rated cell life was used, as well as the min/max/avg write count for different cells. In this case, it looks like the cells are rated for 1,800 writes and about 1,100 were used on average.
Timed Workload Media Wear. The media wear by the current “workload.” This device allows you to measure some statistics from the time you reset them (called the “workload”) in addition to showing the device lifetime values.
Timed Workload Host Reads. The percentage of IO operations that were reads (since the workload timer was reset).
Thermal Throttle Status. This shows if the device is throttled due to overheating, and when there were throttling events in the past.
Host Bytes Written. The bytes written to the NVMe storage from the system, also reported in 32MB units. The absolute scale of these values is not very important; they are most useful for computing the write amplification of your workload, i.e. the ratio of bytes written to NAND to bytes written by the host. For this example, the Write Amplification Factor (WAF) is 16185227 / 6405605 = 2.53.
A quick grep -P one-liner to pull an IPv4 address out of arbitrary text:
echo '01-01-1970 - Some text about yo mama 255.168.001.255 - fo realz' | grep -P "(?:\b(?:\.)?2(?:[0-4]\d|5[0-5])(?:\.)?\b|\b(?:\.)?[0-1]\d\d(?:\.)?\b|\b(?:\.)?\b\d\d\b(?:\.)?\b|\b(?:\.)?\b\d\b(?:\.)?\b){4}"
1 teaspoon vanilla (or scrapings from a 2 inch piece of vanilla pod)
pecans, toasted and chopped (4oz, 113g)
Method
Preheat oven to 350F/180C with the oven racks set in the top and bottom third of the oven. Prepare 2 baking sheets with parchment or a silicone mat.
Combine oats, flour, brown sugar, salt, and cinnamon, if using, in a large mixing bowl.
Brown the spread in a small saucepan. When the solids become golden brown, transfer the spread to a small bowl and add the maple syrup.
Dissolve the baking soda in the boiling water then add to the spread and maple syrup. It will become slightly foamy. Add vanilla. Stir into dry ingredients. Add pecans and combine thoroughly.
Shape into 1/4 cup size balls and place on parchment covered baking sheets, a few inches apart. Flatten balls slightly.
Bake, one sheet in the top third and one in the lower third of the oven, switching positions half way through the baking time, just until set and golden on the edges but still soft inside. Be careful not to overcook or they will be dry and hard. My baking time was about 16 minutes.
Cool cookies on the baking sheets for 5 minutes then transfer to a cooling rack.
Once completely cooled, store in an airtight container.
Tracker is a file indexing and search tool for Linux. Gnome makes use of it for some of its functionality, and as a result, Tracker is installed by default.
The tool speeds up searching and enables full-text search in the Files app, makes the metadata-based batch rename feature in the Files app work, and enables file and folder search in the Activities Overview. Some GNOME apps depend on it too (and don't work at all without it), like Music or Photos. Without Tracker you'll lose these features, so take this into consideration before completely disabling it.
While it brings a number of useful features to the GNOME desktop, Tracker can also have a performance impact in some cases. These performance issues have supposedly been fixed, but there are still users who encounter them, or who consider Tracker too resource intensive.
The official way of disabling Tracker on GNOME desktops is to go to Settings -> Search and turn off the switch in the search settings header bar (top of the window). Some users claim this does not actually disable it, so I decided to try it out: after turning the option off and rebooting, tracker status still reported more than 100,000 files in its index and said it was currently indexing files. You can still give this a try and see whether it has any impact on your system.
So how do you completely disable Tracker, so that it no longer indexes any files and no Tracker process runs in the background? You can mask the Tracker systemd user services to completely disable it for your current user:
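The exact command isn't shown above; a reasonable version for Tracker 2.x is below (on Tracker 3 the units carry a -3 suffix, e.g. tracker-miner-fs-3.service, so check systemctl --user list-unit-files | grep tracker first):
systemctl --user mask tracker-store.service tracker-miner-fs.service tracker-extract.service tracker-miner-apps.service tracker-writeback.service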
CAUTION: This process may irreversibly delete data. Although most content indexed by Tracker can be safely reindexed, it can’t be assured that this is the case for all data. Be aware that you may be incurring in a data loss situation, proceed at your own risk.
Are you sure you want to proceed? [y|N]: y
Found 3 PIDs…
Killed process 1357 — “tracker-miner-fs”
Killed process 2614 — “tracker-extract”
Killed process 42269 — “tracker-store”
_g_io_module_get_default: Found default implementation dconf (DConfSettingsBackend) for ‘gsettings-backend’
Setting database locations
Checking database directories exist
Checking database version
Checking whether database files exist
Removing all database/storage files
Removing database:'/home/cdstealer/.cache/tracker/meta.db'
Removing db-locale file:'/home/cdstealer/.cache/tracker/db-locale.txt'
Removing journal:'/home/cdstealer/.local/share/tracker/data/tracker-store.journal'
Removing db-version file:'/home/cdstealer/.cache/tracker/db-version.txt'
You could opt to uninstall Tracker completely, but that could be problematic given the applications that depend on it. So instead I disabled it as above, as I'm the only user. It has never caused any real issues, but I don't need that level of indexing as I almost never use GNOME search.
My desktop has 32GB of RAM and was always full of cache generated by Tracker, so after disabling Tracker I ran the following command (as root) to clear the cache in RAM.
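The command itself isn't shown above; what I would expect here is the usual page-cache drop (an assumption, not taken from the original):
sync
echo 3 > /proc/sys/vm/drop_caches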
Vegan blueberry frangipane tarts - these delicious little eggless and dairy free blueberry bakewell tarts are a perfect Summer treat! Lovely for dessert or afternoon tea.
Course: Dessert | Cuisine: British, vegan | Keyword: tarts | Prep Time: 25 minutes | Cook Time: 50 minutes | Servings: 8 people | Author: Domestic Gothess
Ingredients
Pastry:
200 g (1 +2/3 cup) plain (all-purpose) flour
50 g (1/2 cup) ground almonds
50 g (1/3 cup + 1 Tbsp) icing (powdered) sugar
1/4 tsp salt
150 g (5.3 oz / 1/2 cup + 2 Tbsp) vegan block butter (NOT the spreadable kind. I use Naturli Vegan Block) cold and diced
1 Tbsp cold vodka (or water)
Frangipane:
70 g (2.5 oz / scant 1/3 cup) melted vegan block butter (I use Naturli Vegan Block)
110 g (1/2 cup + 1 Tbsp) caster sugar
40 g (1/3 cup) plain (all-purpose) flour
5 g (1/2 Tbsp) cornflour (cornstarch)
80 ml (1/3 cup) aquafaba or non dairy milk
175 g (1 + 3/4 cups) ground almonds
1/2 tsp baking powder
3/4 tsp almond extract
3/4 tsp vanilla extract
To Finish:
about 8 heaped tsp blueberry jam
a handful of fresh blueberries
large handful flaked almonds
Instructions
To make the pastry, place the flour, ground almonds, icing sugar and salt in a food processor and pulse to combine.
Add the diced cold butter and blend until it resembles fine breadcrumbs. With the motor running, gradually drizzle in the cold vodka (or water), until the pastry comes together into a ball.
Shape the pastry into a disc, wrap in clingfilm (or an environmentally friendly alternative) and place in the fridge for half an hour.
Divide the chilled pastry into 6 even pieces and roll each one into a ball. Roll each ball out thinly on a floured surface so that it is large enough to line an 8-9cm/3.25-3.5in tart tin.
Carefully lift the pastry into the tin and press it right into the corners and up the sides. Roll over the top with a rolling pin to trim off the excess pastry. Reserve the trimmings.
Repeat with the rest of the balls of pastry then gather together the trimmings, divide in half and roll each into a ball. Roll out as before and line another two tart tins. You should get 8 tarts in total.
Prick the pastry cases all over the base with a fork then place them in the freezer for 20 minutes while you preheat the oven to 180°C/350°F/gas mark 4.
Line each of the frozen pastry cases with a square of tin foil, pressing it right into the corners. Fill each one with baking beans or dried rice then bake for 15 minutes.
Remove the tin foil and beans/rice and return the pastry cases to the oven for 5 minutes then remove and set aside.
To make the frangipane, whisk together the melted vegan butter and the sugar then whisk in the flour and cornflour followed by the aquafaba. Finally, mix in the ground almonds, baking powder and almond and vanilla extracts.
Spread a heaped tsp of blueberry jam over the base of each tart shell then spread a couple of heaped Tbsp of the frangipane over the top, making sure that the jam is fully covered. The frangipane will puff up a little in the oven so don't fill them more than 3/4 full.
Scatter some fresh blueberries over each one, making sure that they aren't too close to the edge as any juice that seeps out can cause the pastry to stick to the tin if it bubbles over.
Finally, scatter over some flaked almonds then bake the tarts for about 30-35 minutes, until nicely browned.
Leave to cool in the tins for 20 minutes before turning out. Store any leftovers in an airtight container for up to 3 days.
If you're running your own DNS server as described here, then you can easily setup your domain zone to block ads, malware, phishing etc etc.
I'll describe the process here.
In named.conf, add the following within the options block:
response-policy {
zone "sinkhole";
};
Next, download the RPZ (Response Policy Zone) file from a reputable source. For this I'll be using EnergizedProtection.
This is ~25MB in size and contains over 900,000 entries.
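For reference, the blocking entries in an RPZ zone file look roughly like this (a minimal hand-written sketch; the downloaded EnergizedProtection list already provides the zone boilerplate and the entries, so you don't need to write these yourself):
$TTL 2h
@                  IN SOA  localhost. root.localhost. ( 1 12h 15m 3w 2h )
                   IN NS   localhost.
; return NXDOMAIN for a domain and all of its subdomains
ads.example.net    CNAME   .
*.ads.example.net  CNAME   .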
Next I added a new zone to named.conf:
zone "sinkhole" IN {
type master;
file "pri/sinkhole.zone";
notify yes;
allow-update { key "rndc-key"; };
};
I discovered, though, that a few lines in it were too long, so before restarting named, run a check:
# named-checkzone sinkzone /var/bind/pri/sinkhole.zone
dns_master_load: /var/bind/pri/sinkhole.zone:316077: ran out of space
dns_master_load: /var/bind/pri/sinkhole.zone:467504: ran out of space
zone sinkzone/IN: loading from master file /var/bind/pri/test failed: ran out of space
zone sinkzone/IN: not loaded due to errors.
This means that the lines at the given line numbers in the zone file are too long; they need to be fixed or removed before the zone will load.
Before adding a new SSH key to the ssh-agent to manage your keys, you should have checked for existing SSH keys and generated a new SSH key.
Doing this weakens security slightly, but only if someone already has access to your account or your system has already been compromised.
ssh-agent is not a daemon and must be started on each login, at which point it will ask for the passphrase. You *could* nohup the command so that it stays alive between logins, but I'd avoid doing this.
Start the ssh-agent in the background.
$ eval "$(ssh-agent -s)"
> Agent pid 59566
Add your SSH private key to the ssh-agent. If you created your key with a different name, or if you are adding an existing key that has a different name, replace id_rsa in the command with the name of your private key file.
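For example, to add the default RSA key:
$ ssh-add ~/.ssh/id_rsa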