
· 5 min read
Ziinc

EmailOctopus is a lovely newsletter site. I enjoy the fact that their UI is well done and user friendly, and on the technical side they have EmailOctopus Connect, which allows for lower email costs. They bill by subscriber count though, so for any seriously large subscriber count it would make more sense to self-host, but I like using them for small projects (like this blog, for example) that will never see more than a handful of subscribers.

However, EmailOctopus could definitely up their game when it comes to their developer APIs and scripts. Their embeddable script is an absolute pain to work with when it comes to custom websites, especially if you're using a shadow DOM.

<script
  async
  src="https://eomail1.com/form/983899ac-29fb-11ef-9fcd-4756bf35ba80.js"
  data-form="983899ac-29fb-11ef-9fcd-4756bf35ba80"
  defer
  type="text/javascript"
></script>

Let me break this down for you:

  • It loads a script asynchronously. The script is custom generated for the specific form, as can be seen from the UUID in the script URL.
  • The script will insert a form, as well as load some additional Google reCAPTCHA scripts for spam protection. It will also load some Google Fonts and any assets related to the form.
  • By default, it does not come with the defer and type attributes. These were added in by me, and ensure that the browser executes it as JavaScript and that execution is deferred until the DOM is fully loaded.
  • It finds a <script> tag whose data-form attribute has that exact UUID and replaces it with the form, creating the required DOM nodes within the HTML page.

However, adding in the script directly to a React component would not work:

// 🚫 This will not work!
const MyComponent = () => (
  <div>
    <script
      async
      src="https://eomail1.com/form/983899ac-29fb-11ef-9fcd-4756bf35ba80.js"
      data-form="983899ac-29fb-11ef-9fcd-4756bf35ba80"
      defer
      type="text/javascript"
    ></script>
  </div>
);

Why wouldn't this work?

  • React works with a virtual DOM, so there would not be any script tag available in the HTML at page load. React mounts the component on the client.
  • Even with React server-side rendering, the script tag would not be executed, because React protects against malicious code setting raw HTML inside components. One would need to use dangerouslySetInnerHTML for this to work.

Thus, we need to adjust our React code in Docusaurus to:

  1. execute the script; and then
  2. create the HTML tags at the <script> tag; but
  3. only do it client side;
Why do we want it to be only client side?

Docusaurus generates both server and client code during the build step. Although server-side generation would have some benefits, such as less JavaScript running on the client's initial load, there is added complexity in trying to wrangle with the Docusaurus SSR build step, so leaving it client side is fine. There are also no SEO benefits to be gained from rendering the form on the server.

For any other React library, this would likely be irrelevant.

Step 1: Create the Form

Create the form inside EmailOctopus and obtain the embed script.

Example of form creation

Step 2: Add the wrapped component to your layout

Add the <Newsletter /> tag wherever you want to slot your newsletter form. You can also swizzle one of the layout components, but how to do that is out of scope for this blog post.

import React from "react";
import Newsletter from "@site/src/components/Newsletter";

export default function MyComponent(props) {
  return (
    <>
      <Newsletter />
      ...
    </>
  );
}

Step 3: Install React Helmet

We'll need some way to load the script in the head of the HTML document. We'll reach for React Helmet in this walkthrough, so run the current variation du jour of npm install --save react-helmet-async.

Step 4: Add in the Newsletter component

For our component to work successfully, we need to create the file at /src/components/Newsletter.tsx and define the component as such:

//  /src/components/Newsletter.tsx
import React from "react";
import { Helmet } from "react-helmet-async";
import BrowserOnly from "@docusaurus/BrowserOnly";

const Newsletter = () => (
  <div
    style={{
      marginLeft: "auto",
      marginRight: "auto",
    }}
  >
    <BrowserOnly>
      {() => (
        <>
          <Helmet>
            <script
              async
              defer
              src="https://eomail1.com/form/983899ac-29fb-11ef-9fcd-4756bf35ba80.js"
              type="text/javascript"
            ></script>
          </Helmet>
          <script
            type="text/javascript"
            data-form="983899ac-29fb-11ef-9fcd-4756bf35ba80"
          ></script>
        </>
      )}
    </BrowserOnly>
  </div>
);

export default Newsletter;

In this component, there are a few things going on:

  1. We set the script in the <Helmet /> component, meaning that it will be placed in the <head> tag of the HTML document. Two additional attributes are added as well: defer, to load this after the main document loads, and type="text/javascript" for completeness.
  2. We also add the extra <script> tag in the component, with the data-form attribute, to let the script identify it as the parent node to insert the form elements into.
  3. We wrap all of this inside the <BrowserOnly /> component that comes with Docusaurus, which allows us to run this code only on the client. As these scripts do not affect SEO, it is not necessary to include them in the server-side generation.

Step 5: Verify it all works

Now check that it all works on your localhost as well as on production, and pat yourself on the back!

· 5 min read
Ziinc
👋 I'm a dev at Supabase

I work on logging and analytics, and manage Logflare, the underlying service that powers Supabase Logs. The service handles over a billion requests each day with traffic constantly growing, and these devlog posts talk a bit about my day-to-day open source dev work.

It serves as some insight into what one can expect when working on high performance and high availability software, with real code snippets and PRs to boot. Enjoy! 😊

This week, I'm implementing OpenTelemetry, which generates traces of our HTTP requests to Logflare, the underlying analytics server of Supabase. For Elixir, we have the following dependencies that we need to add:

# mix.exs
[
  ...
  {:opentelemetry, "~> 1.3"},
  {:opentelemetry_api, "~> 1.2"},
  {:opentelemetry_exporter, "~> 1.6"},
  {:opentelemetry_phoenix, "~> 1.1"},
  {:opentelemetry_cowboy, "~> 0.2"}
]

A quick explanation of each package:

  • :opentelemetry - the underlying core Erlang modules that implement the OpenTelemetry Spec
  • :opentelemetry_api - the Erlang/Elixir API for easy usage of starting custom traces
  • :opentelemetry_exporter - the functionality that hooks into the recorded traces and exports them to somewhere
  • :opentelemetry_phoenix - automatic tracing for the Phoenix framework
  • :opentelemetry_cowboy - automatic tracing for the cowboy webserver

Excluding ingestion and querying routes

Logflare handles a ton of ingestion and querying routes every second, and if we were to track every single one of them, we would generate a huge amount of traces. This would not be desirable or even useful, because storage costs would be quite high and a lot of it would be noise.

What we need is to exclude these specific API routes but record the rest. We don't want to record everything, of course, as a sample of a large amount of traffic usually suffices to give a good analysis of overall performance.

Of course, when using sampling, we would not have a wholly representative dataset of traces that would represent real-world performance. However, for practical purposes, we would be using the OpenTelemetry traces for optimizing a majority of request happy paths.
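To put a rough number on why sampling matters: the intro above mentions over a billion requests a day, and (as we'll see in the configuration below) the default sample ratio is 0.001. A quick back-of-the-envelope calculation, sketched in Python:

```python
# Back-of-the-envelope: even aggressive sampling leaves plenty of traces.
# Both numbers are taken from this post (order-of-magnitude traffic, default ratio).
requests_per_day = 1_000_000_000
sample_ratio = 0.001  # default LOGFLARE_OPEN_TELEMETRY_SAMPLE_RATIO

traces_per_day = int(requests_per_day * sample_ratio)
print(traces_per_day)  # prints 1000000
```

So even at a 0.1% ratio we still record on the order of a million traces daily, which is more than enough to analyse the happy paths.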

In order to do so, I had to implement a custom sampler for OpenTelemetry. The main pull request is here, and I'll break down some parts of the code for easy digestion.

Configuration Adjustments

We need to make the configuration flexible enough to allow for self-hosting users to increase/decrease the default sampling probability. This also allows us to configure the sampling probability differently for different clusters, such as having higher sampling for our canary cluster.

# runtime.exs
if System.get_env("LOGFLARE_OTEL_ENDPOINT") do
  config :logflare, opentelemetry_enabled?: true

  config :opentelemetry,
    traces_exporter: :otlp,
    sampler:
      {:parent_based,
       %{
         root:
           {LogflareWeb.OpenTelemetrySampler,
            %{
              probability:
                System.get_env("LOGFLARE_OPEN_TELEMETRY_SAMPLE_RATIO", "0.001")
                |> String.to_float()
            }}
       }}
end

Lines in GitHub

We define a custom sampler, LogflareWeb.OpenTelemetrySampler, that works on the parent trace (as specified by :parent_based), and pass in the :probability option as a map key to the sampler.

We also conditionally start the OpenTelemetry setup code for the Cowboy and Phoenix plugins based on whether the OpenTelemetry exporting endpoint is provided:

# lib/logflare/application.ex
if Application.get_env(:logflare, :opentelemetry_enabled?) do
  :opentelemetry_cowboy.setup()
  OpentelemetryPhoenix.setup(adapter: :cowboy2)
end

Lines on GitHub

Remember that we set the :opentelemetry_enabled? flag in the runtime.exs above?

Custom Sampler

The custom OpenTelemetry sampler works by wrapping the base sampler :otel_sampler_trace_id_ratio_based with our own module. The logic is in two main portions of the module: the setup/1 callback, and the should_sample/7 callback.

In the setup/1 callback, we delegate to :otel_sampler_trace_id_ratio_based.setup/1 with the probability float as input. This generates a map with two keys: the probability as-is, and something called :id_upper_bound.

# lib/logflare/open_telemetry_sampler.ex
@impl :otel_sampler
def setup(opts) do
  :otel_sampler_trace_id_ratio_based.setup(opts.probability)
end

How the trace ID sampling works is that each trace has a generated ID, which is a very large integer like 75141356756228984281078696925651880580. A bitwise AND is performed with a hardcoded max trace ID value, and the result is then compared against the upper bound ID. If it is smaller than the upper bound ID, the sample is recorded; otherwise it is dropped. This is implementation specific and out of scope for this blog post, but you can read more in the OpenTelemetry spec under the TraceIdRatioBased sampler specification.
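To make that comparison concrete, here is the decision sketched in Python. This is an illustrative sketch of the spec's logic, not the actual Erlang implementation; the 2^63 - 1 mask is my assumption based on the hardcoded maximum in :otel_sampler_trace_id_ratio_based:

```python
# Sketch of TraceIdRatioBased sampling (illustrative only; the real logic
# lives in :otel_sampler_trace_id_ratio_based).
MAX_VALUE = 2**63 - 1  # assumed hardcoded max trace ID mask

def id_upper_bound(probability: float) -> int:
    # setup/1 precomputes this value from the sampling probability
    return round(probability * MAX_VALUE)

def record_sample(trace_id: int, probability: float) -> bool:
    # The bitwise AND keeps only the lower bits of the 128-bit trace ID,
    # which are then compared against the precomputed upper bound.
    return (trace_id & MAX_VALUE) < id_upper_bound(probability)
```

Because trace IDs are effectively random, roughly a `probability` fraction of them land below the upper bound, which is what makes this a ratio-based sampler.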

Here is the code. For brevity's sake, I have omitted the arguments in the should_sample/7 function call and definition:

# lib/logflare/open_telemetry_sampler.ex
@impl :otel_sampler
def should_sample(...) do
  tracestate = Tracer.current_span_ctx(ctx) |> OpenTelemetry.Span.tracestate()

  exclude_route? =
    case Map.get(attributes, "http.target") do
      "/logs" <> _ -> true
      "/api/logs" <> _ -> true
      "/api/events" <> _ -> true
      "/endpoints/query" <> _ -> true
      "/api/endpoints/query" <> _ -> true
      _ -> false
    end

  if exclude_route? do
    {:drop, [], tracestate}
  else
    :otel_sampler_trace_id_ratio_based.should_sample(...)
  end
end

Lines on GitHub

Here, because this is a highly executed code path, we use a case for the path check instead of multiple if-do or cond-do blocks, because binary pattern matching in Elixir is very performant. Furthermore, binary pattern matching is perfect for our situation, because we only need to check the first part of the HTTP route instead of the full path.

The code is relatively simple: it delegates to :otel_sampler_trace_id_ratio_based.should_sample/7 if the route is not one of the hot paths. If it is one of the hot paths, we drop the trace. As this sampler works on the parent, it will drop all child traces as well.
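For illustration, the same prefix-exclusion decision can be sketched in Python (the function and constant names here are mine, not Logflare's; the route list is copied from the case clauses above):

```python
# Routes excluded from tracing (copied from the sampler's case clauses).
EXCLUDED_PREFIXES = (
    "/logs",
    "/api/logs",
    "/api/events",
    "/endpoints/query",
    "/api/endpoints/query",
)

def exclude_route(http_target: str) -> bool:
    # Like the Elixir binary match "/logs" <> _, this inspects only the
    # leading characters of the path rather than comparing the whole string.
    return http_target.startswith(EXCLUDED_PREFIXES)
```

The Elixir version gets this prefix behaviour for free from binary pattern matching, which the BEAM compiles into a very cheap match on the leading bytes.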

Arguably, we could optimize this even further by rewriting the conditional delegation into multiple function heads, pattern matching on the attributes argument and doing the binary check within the function head. As always, premature optimization is the enemy of all software engineers, so I'll defer this refactor until the next time I need to improve this module.

Wrap up

And that is how you implement a custom OpenTelemetry sampler!

· 3 min read
Ziinc

It has been a long time since my initial post on using Distillery to manage Ecto migrations on startup, and I'm super happy that the Elixir core team has worked on making deployments a breeze now.

The absolute simplest way to achieve migrations on startup is now as follows:

  1. Write the migrator function using this example here
  2. Add a startup script in the release overlays folder
  3. Add the startup script to your dockerfile CMD

Writing the Migrator Function

This is going to be largely lifted from the Phoenix documentation, but the core aspects that you need are all here:

defmodule MyApp.Release do
  @app :my_app

  def migrate do
    load_app()

    for repo <- repos() do
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  defp repos do
    Application.fetch_env!(@app, :ecto_repos)
  end

  defp load_app do
    Application.load(@app)
  end
end

I have left out the rollback/2 function, but you can include it if you really think you will need it. It's more likely that in reality you'll just add a new migration to fix a bad migration, so it is up to personal preference.

Adding the Startup Script

With Elixir releases, we now have a nice convenient way to copy our files into our releases automatically, without having to do multiple COPY commands in our dockerfiles. Neat! Anything to save a few lines of code!

Create your startup file here:

# rel/overlays/startup.sh
./my_app eval "MyApp.Release.migrate"
# optionally, start the app
./my_app start

This assumes that we will be setting our working directory to our release root, which we will do in our dockerfile. If you wish to couple the migrations together with the app startup, you can add the optional ./my_app start portion. However, you can also decouple it so that you don't end up in a bootloop in the event of a bad migration. As always, it really depends on your situation.

And then in your release configuration:

# mix.exs
releases: [
  my_app: [
    include_executables_for: [:unix],
    # the important part!
    overlays: ["rel/overlays"],
    applications: [my_app: :permanent]
  ]
]

This will then copy your files under the rel/overlays directory over to the built release.

Add the Startup Script to Dockerfile

Let's run the startup script now from our dockerfile as so:

WORKDIR ".../my_app/bin"
CMD ["./startup.sh"]

The three dots are for illustration purposes, adjust the WORKDIR to the actual directory that you copied your release binaries to. If you coupled your app startup together with the startup script, the above CMD will run the migrations and then start up the app.

If you wish to decouple the migrations from the startup, you can do the following (note the shell form of CMD here, since the exec form would pass "&&" as a literal argument instead of chaining the commands):

WORKDIR "..."
CMD ./startup.sh && ./my_app start

Wrap Up

An important mention is that Phoenix now comes with mix phx.gen.release, which includes a dockerfile option for bootstrapping docker-based release workflows. The migration files are automatically generated for you too. However, you wouldn't want to use the helper if you aren't doing any Phoenix stuff, and the above walkthrough will work for any generic Elixir release.

Thanks for reading!

· One min read
Ziinc

You can run multiple ExUnit test lines with the following:

mix test my_app/test/path_to_file_test.exs:123:223:333

Each number prefixed with the colon will be run. The above example results in 3 lines being run, with the following result:


Running tests...
==> my_app
Excluding tags: [:test]
Including tags: [line: "123", line: "223", line: "333"]
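Conceptually, mix turns that colon-separated spec into the file path plus one line tag per number, which matches the "Including tags" output above. A rough Python sketch of that parsing (illustrative only, not mix's actual implementation):

```python
def parse_test_spec(spec: str):
    # Split "path/to/file_test.exs:123:223:333" into the test file
    # and one ("line", number) filter per colon-suffixed number.
    path, *lines = spec.split(":")
    return path, [("line", line) for line in lines]
```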

· 4 min read
Ziinc

Pre-requisite concepts:

  • Broadway and multi-process pipelining
  • GenServer calls vs casts

The Problem

Logflare's ingestion pipeline for one of Supabase's log sources was falling over and crashing, resulting in a periodic bootloop where the pipeline for that particular source was restarting every few hours.

The Symptoms

Each time it restarted, some events would get dropped and 500 response codes would spike momentarily; however, it would automatically recover and continue ingesting events.

However, within a few hours, we would start to see a huge spike in logs, where the pipeline and all related processes would crash and burn, restarting the loop.

log event pattern

Each warning log (indicated by the yellow bars) related to a type error that the Schema process was emitting.

An aside for context: The Schema process in Logflare is responsible for identifying if there are any changes between an incoming log event and the underlying BigQuery table schema. If there is a difference, then we will attempt to patch and update the BigQuery table with the new column and datatype detected.

The Solution

Digging a little deeper in the logs and the pattern of the restarts, it was clear that there was an issue with the ingestion pipeline, specifically at possible places that would cause a restart of the entire supervision tree.

In the Elixir/Erlang world, there are only a few possibilities that would cause the entire supervision tree to terminate:

  1. the supervisor is terminated abnormally through a crash
  2. the supervisor is terminated normally by stopping
  3. the children of the supervisor keep crashing to a point where the supervisor does a fresh restart.

Figuring out the Root Cause

But the evidence did not suggest that the Schema process was the main cause of the restart. If so, there would be clear logs of the Schema process exiting and restarting.

Furthermore, the pattern of the log warnings indicated a gradual drop-off in Schema warnings over time. This meant that fewer and fewer logs were getting ingested over time, as the schema check was performed on each ingest. The warning itself was an indicator of where the underlying issue was, and it was where I started my investigation.

After closer observation of the code, it was evident that there was a bottleneck when attempting to execute the Schema checks. The pipeline was performing a synchronous, blocking call to perform the check on each incoming event. Although any errors returned from the check would not cause the calling process to crash, the blocking call would cause other processes attempting to perform the check to wait. And because of Broadway's max_restarts of 0, this would result in the entire pipeline restarting. All it took was a slower than normal request to the BigQuery API, or a sudden influx of log events.

And because of the sheer incoming volume getting streamed into this particular log source, the bottleneck was getting reached, causing this bootloop pattern to surface.

The fix was straightforward: make the schema checks asynchronous, such that they would not block the calling process. This way, ingestion throughput is maintained, and any errors or slowdowns from the Schema process do not bubble up and disrupt ingestion.
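The shape of that fix can be illustrated with a toy Python analogue of GenServer.call versus GenServer.cast (this is an illustration of the blocking-versus-fire-and-forget distinction, not Logflare's actual code):

```python
import queue
import threading

# Toy "Schema process": a worker draining check requests from a mailbox.
mailbox = queue.Queue()

def schema_worker() -> None:
    while True:
        request = mailbox.get()
        if request is None:  # shutdown signal
            break
        event, reply = request
        # ... pretend to diff `event` against the stored table schema here ...
        if reply is not None:
            reply.set()  # call-style: a caller is blocked waiting on this

def check_schema_sync(event: dict) -> None:
    # GenServer.call-style: the ingest path blocks until the check completes,
    # so one slow check stalls every caller queued behind it.
    reply = threading.Event()
    mailbox.put((event, reply))
    reply.wait()

def check_schema_async(event: dict) -> None:
    # GenServer.cast-style: fire-and-forget, so ingestion throughput is
    # unaffected by how long the schema check takes.
    mailbox.put((event, None))
```

With the cast-style version, a slow or failing check never propagates back to the ingest pipeline, which is exactly the property the fix needed.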

As to why the Schema warnings would slow down over time, that is a mystery we may never solve. As with any sufficiently complex distributed server setup, it is very hard to diagnose such behaviours unless you can replicate them. I have a variety of hypotheses, such as Google APIs rejecting or rate-limiting us due to errors, or memory issues, but these are just thoughts.

The resultant pull requests are here: