Lesson Learned and Personal Thoughts

Migrating from Hanami 1.3 to Rails 7


In 2017, I created a URL shortener app using Hanami because I liked how Hanami separated its web and API layers into separate apps.

In 2024, I tried to upgrade it, and this is what happened.

Why Hanami?

Simply because the grass felt greener at the time. We had interviewed a candidate who was the only one using Hanami 1.x, and to me it felt super clean compared to the Rails 5.x of that era.

Since then, I studied Hanami. I like Hanami’s philosophy, where every project is created API first, web second. Hanami tries to break down each of your services into separate apps. You can read more here.

That’s when I decided to create my URL shortener project using Hanami.

What Happened Next?

After 2019, I didn’t really code in Ruby anymore. My last exposure was Ruby 2.6.6, which has been end-of-life since March 2022. I also got the itch to update my URL shortener, which was turning 7 years old, to see how easy it would be to maintain. So I installed Ruby 3.3.5 and started hacking away.

Well, not really. I got my first roadblock with Hanami.

Hanami 2.0 takes a very different approach from Hanami 1.3. It no longer has apps where we can place different modules, and with a single app it started to look a lot like Rails to me.

At first, I tried to upgrade it using several guides I found online. At one point I also started a fresh Hanami project. However, I didn’t feel as excited as before, probably because the structure reminded me too much of Rails.

Then it got me thinking: what has happened to Rails since?

Why Rails Now?

My last exposure to Rails was Rails 5.1, so I took this opportunity to create a fresh Rails project, and I really liked it.

I discovered all of these amazing little details in a fresh Rails project, such as:

  1. Rails now ships with a GitHub Actions workflow out of the box.
  2. Rails now ships with Brakeman (for scanning Ruby code for security vulnerabilities).
  3. Rails now ships with Turbo Rails, which makes HTML pages feel as fast as a SPA. It also enables custom confirm dialogs out of the box (see the sketch after this list).
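
As a quick illustration of point 3, here is a minimal sketch of the Turbo confirm dialog. The view and the link_path helper are hypothetical, but the data-turbo-confirm attribute is the standard Turbo way to ask for confirmation before a form is submitted.

<%# Hypothetical delete button in a Rails 7 ERB view. %>
<%# Turbo intercepts the submit and shows a confirm dialog before sending the DELETE request. %>
<%= button_to "Delete", link_path(link), method: :delete,
      data: { turbo_confirm: "Delete this short link?" } %>

By default this uses the browser’s built-in confirm dialog, and Turbo lets you swap in your own dialog implementation if you want a custom look.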

When I saw the GitHub workflow, I decided to convert the project to Rails. To my delight, it has most of the basic CI needs covered:

  1. It has Brakeman (for scanning the Ruby code for security vulnerabilities).
  2. It has Importmap auditing (for scanning JavaScript dependencies for known vulnerabilities).
  3. It has RuboCop (as the style linter).
  4. It runs the tests.

Rails 7.2.1 also ships with Selenium and Capybara, so you can run system tests on your application.
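
To give a feel for what that looks like, here is a minimal sketch of a system test. The ShortenedUrlsTest class, the form labels, and the page content are made up for this example; ApplicationSystemTestCase is the base class Rails generates for you.

# test/system/shortened_urls_test.rb
require "application_system_test_case"

class ShortenedUrlsTest < ApplicationSystemTestCase
  test "shortening a URL from the home page" do
    visit root_path

    # Fill in the (hypothetical) form and submit it through a real browser
    fill_in "Original URL", with: "https://example.com/some/very/long/path"
    click_on "Shorten"

    assert_text "Your short URL"
  end
end

Running bin/rails test:system drives these through the Selenium setup that the generated ApplicationSystemTestCase provides.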

Lesson Learned

In my personal opinion, it’s always better to stick with a framework that is well established and has a bigger community. On top of that, Rails has excellent conventions which haven’t deviated much since its inception.

So for now, I will stick with Rails for my project. Time will tell if I change my mind, perhaps in another 7 years.

Understanding NodeJS, Kafka, and SSL


A few months back, I was tasked with upgrading the NodeJS version for a project. This project consumes Kafka messages and connects to Kafka using SSL.

Seems very simple, right? Well..

Why Update?

To set up the context: our application was running on NodeJS v8 LTS, whose support ended in Dec 2019.

In order to keep up with security patches and bug fixes, it’s important to always use an LTS version of NodeJS. When a vulnerability is discovered in a NodeJS version that is no longer supported, fixing the application will cost more than pre-emptively upgrading would have.

And so, our team decided to upgrade from NodeJS v8 to NodeJS v12 LTS, which will be supported until Apr 2022. This is good enough for our team because it is the stable LTS version at the time of this writing.

The Painful Lesson

Our application is dockerized with RHEL 7 as the underlying OS, so it’s a pretty simple process to upgrade.

As part of the business requirements, our application needs to read messages from Kafka. For this purpose, it uses node-rdkafka to connect to Kafka, and the connection is secured through SSL.

When the application tried to run the Kafka consumer for the first time, we got this error:

Segmentation fault (core dumped)

I have to admit that debugging this issue got the better of me, simply because I didn’t understand enough about how NodeJS ships with its own OpenSSL version.

Understanding NodeJS, Librdkafka and C++

What is a segmentation fault? A segmentation fault is an error raised when a program written in C / C++ tries to access memory it is not allowed to; the operating system kills the process to prevent memory corruption.

In our application’s case, this happened because of an incompatibility between the OpenSSL version shipped with NodeJS v12 and the OpenSSL version provided by the OS.

When we run npm install, node-rdkafka builds librdkafka, a C++ library, against the OpenSSL libraries found in the OS. RHEL 7 generally ships with OpenSSL 1.0.2 by default.

Meanwhile, NodeJS v12 ships with OpenSSL 1.1.1. This can be inspected using process.versions:

$ node -p process.versions
{
  node: '12.18.1',
  v8: '7.8.279.23-node.38',
  uv: '1.38.0',
  zlib: '1.2.11',
  brotli: '1.0.7',
  ares: '1.16.0',
  modules: '72',
  nghttp2: '1.41.0',
  napi: '6',
  llhttp: '2.0.4',
  http_parser: '2.9.3',
  openssl: '1.1.1g',
  cldr: '37.0',
  icu: '67.1',
  tz: '2019c',
  unicode: '13.0'
}

Solution

Now that we know the problem is a mismatch between the two OpenSSL libraries, we need to ensure that librdkafka is compiled against the same OpenSSL version that NodeJS v12 ships with. Upgrading the OpenSSL version in our Docker image worked, and the application can now connect to Kafka using SSL.


Bulk Upload using Ruby Elasticsearch Gem


I was tasked with a data transformation job, the results of which need to be stored in an Elasticsearch cluster. The gist of the job is very simple:

  1. Run a massive query which returns a CSV file of roughly 3GB.
  2. Import the CSV file into Elasticsearch.
  3. Users should be able to query the data in Elasticsearch.

Seems very simple, right? So I am using the elasticsearch-ruby gem to make life easier for the rest of us.
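
Here is a rough sketch of how the import step could look with the gem’s bulk API. The index name, CSV path, and batch size are placeholders; the main idea is to stream the 3GB file and flush documents in batches instead of loading everything into memory.

require "csv"
require "elasticsearch"

# Placeholder endpoint; point this at your cluster.
client = Elasticsearch::Client.new(url: ENV.fetch("ELASTICSEARCH_URL", "http://localhost:9200"))

BATCH_SIZE = 1_000
buffer = []

CSV.foreach("export.csv", headers: true) do |row|
  # Each bulk action carries the target index and the document body.
  buffer << { index: { _index: "reports", data: row.to_h } }

  if buffer.size >= BATCH_SIZE
    client.bulk(body: buffer) # one HTTP request per batch
    buffer.clear
  end
end

client.bulk(body: buffer) unless buffer.empty? # flush the remainder

Batching keeps both memory usage and individual request sizes bounded, which matters for a file this large.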

Connecting to AWS Elasticsearch VPC locally


Generally, it is better to secure an Elasticsearch cluster in AWS behind a VPC connection instead of exposing it to the open internet and relying on an access policy alone.

However, connecting to it from localhost is a bit complicated because we need to port-forward the connection. As a prerequisite, we need an EC2 instance in the same VPC as our Elasticsearch cluster to connect through.
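
As an illustration, here is a rough sketch of that port-forward done from Ruby with the net-ssh gem (a plain ssh -L command works just as well). The bastion host, user, key path, and Elasticsearch endpoint below are placeholders.

require "net/ssh"

# Placeholder values for illustration only.
BASTION_HOST = "ec2-bastion.example.com"
ES_ENDPOINT  = "vpc-my-domain.ap-southeast-1.es.amazonaws.com"

Net::SSH.start(BASTION_HOST, "ec2-user", keys: ["~/.ssh/my-key.pem"]) do |ssh|
  # Forward localhost:9200 to the VPC-only Elasticsearch endpoint (HTTPS, port 443),
  # so local tools can reach it at https://localhost:9200.
  ssh.forward.local(9200, ES_ENDPOINT, 443)
  ssh.loop { true }
end

With the tunnel up, curl or the elasticsearch-ruby client can reach the domain at https://localhost:9200, though you may need to relax TLS hostname verification locally since the certificate is issued for the VPC endpoint.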