24-11#
Grafana Loki: Update from v2.9.5 to v3.2.1#
Verify the config as per the docs:
docker run --rm -t -v ./loki/config.yaml:/config/loki-config.yaml grafana/loki:3.2.1 -config.file=/config/loki-config.yaml -verify-config=true
Results in:
CONFIG ERROR: schema v13 is required to store Structured Metadata and use native OTLP ingestion, your schema version is v11. Set `allow_structured_metadata: false` in the `limits_config` section or set the command line argument `-validation.allow-structured-metadata=false` and restart Loki. Then proceed to update to schema v13 or newer before re-enabling this config, search for 'Storage Schema' in the docs for the schema update procedure
CONFIG ERROR: `tsdb` index type is required to store Structured Metadata and use native OTLP ingestion, your index type is `boltdb-shipper` (defined in the `store` parameter of the schema_config). Set `allow_structured_metadata: false` in the `limits_config` section or set the command line argument `-validation.allow-structured-metadata=false` and restart Loki. Then proceed to update the schema to use index type `tsdb` before re-enabling this config, search for 'Storage Schema' in the docs for the schema update procedure
https://grafana.com/docs/loki/latest/operations/storage/schema/
Make Loki use schema v13 (instead of v11) with tsdb store (instead of boltdb-shipper) from tomorrow onwards:
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
    - from: "2024-11-01"
      index:
        period: 24h
        prefix: loki_ops_index_
      object_store: filesystem
      schema: v13
      store: tsdb
limits_config:
  # TODO: remove after schema change
  # https://grafana.com/docs/loki/latest/setup/upgrade/
  allow_structured_metadata: false
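Once the v13/tsdb period is active (i.e. after 2024-11-01 in this config), remove the allow_structured_metadata override again and re-run the verify step from above; it should then pass without the two errors:
docker run --rm -t -v ./loki/config.yaml:/config/loki-config.yaml grafana/loki:3.2.1 -config.file=/config/loki-config.yaml -verify-config=true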
OCSP is dead - long live CRLs#
Let’s Encrypt will end their OCSP Service
TIL: OCSP requests are sent over plain HTTP 😱
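A quick way to see it for yourself (a sketch; example.com and the port are placeholders): print the OCSP responder URL a server certificate advertises, which is typically a plain http:// endpoint.
# grab the server certificate and print the OCSP responder URL it advertises
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -ocsp_uri
# usually prints an http:// URL, i.e. the revocation check itself is unencrypted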
Nothing is Something - Sandi Metz - RailsConf 2015#
RailsConf 2015 - Nothing is Something - YouTube
extract behaviour into objects, e.g. Orderer or Formatter
terraform + ansible#
https://www.reddit.com/r/Terraform/comments/17zt5a9/why_is_the_ansible_provider_so_bad/ argues for a two-step approach, i.e. tofu apply that generates inventory and variables, followed by ansible-playbook .... That sounds very reasonable to me. The tradeoff: you need to run Ansible against the whole world, even if only a single machine changed.
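Roughly, that two-step flow could look like this (a sketch; the hosts output, inventory.ini and site.yml are made-up names, not from the linked thread):
# 1) provision, then dump whatever the playbooks need
tofu apply
tofu output -json hosts | jq -r '.[]' > inventory.ini   # assumes an output "hosts" of type list(string)

# 2) configure everything with the generated inventory
ansible-playbook -i inventory.ini site.yml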
https://withdevo.net/2023/07/12/terraform-ansible-working-together/ argues in favor of the Ansible provider. The approach looks feasible at first glance, but it ends up the same as above: run terraform, then run ansible for all hosts.
extract audio from android app#
emulator + adb
log into Google
Play Store + install the app
Then:
adb devices
adb shell pm list packages
adb shell pm path com.example.app
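# note: on newer Android versions, pm path may return several entries (base.apk plus split_*.apk); pull and decode each of them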
adb pull /data/app/xxxxxxxxxxxxxxxxxxxxxxxxxx/com.example.app.-yyyyyyyyyyyyyyyyyyyyyyyy/foo.apk
sudo apt install apktool
apktool d foo.apk
cd foo/
find . -type f -exec file --mime-type {} + | grep -i 'audio/'
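To collect the hits in one place, something like this does the job (a sketch; the target directory is arbitrary, and it assumes paths without colons):
mkdir -p ../extracted-audio
find . -type f -exec file --mime-type {} + \
  | grep -i 'audio/' \
  | cut -d: -f1 \
  | xargs -I{} cp {} ../extracted-audio/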
Ansible Cloudsmith Repo#
https://help.cloudsmith.io/docs/integrating-ansible#adding-a-debian-repository
You need what they call DISTRO and VERSION. In Debian-speak that's ID and Codename, e.g.
DISTRO = ID = Ubuntu
VERSION = Codename = jammy
Ansible exposes those as facts, e.g.
{
    "ansible_lsb": {
        "codename": "jammy",
        "description": "Ubuntu 22.04.5 LTS",
        "id": "Ubuntu",
        "major_release": "22",
        "release": "22.04"
    },
}
Note that apt sources.list needs the lowercase ID (ubuntu) by convention, e.g.
deb https://dl.cloudsmith.io/public/OWNER/REPOSITORY/deb/DISTRO VERSION main
-->
deb https://dl.cloudsmith.io/public/OWNER/REPOSITORY/deb/ubuntu jammy main
Getting Facts#
target=10.1.1.1
ansible -i ${target}, ${target} -m ansible.builtin.setup
# yeah, i know... "-i ${target}," to fake the inventory 🙄
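To trim the (very long) output down to the facts used here, the setup module takes a filter argument:
ansible -i ${target}, ${target} -m ansible.builtin.setup -a 'filter=ansible_lsb'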
Using Facts#
- name: install stuff
  hosts: all  # or a more specific group
  tasks:
    - name: add foo repo
      ansible.builtin.apt_repository:
        repo: deb https://dl.cloudsmith.io/public/foo/bar/deb/{{ ansible_lsb['id'] | lower }} {{ ansible_lsb['codename'] }} main
        state: present
      become: true
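A possible invocation, reusing the ad-hoc inventory trick from above (the playbook filename is made up):
ansible-playbook -i ${target}, -K add-foo-repo.yml   # -K asks for the sudo password needed by become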