Code Your Way to the Top: A Guide to Crafting the Perfect Software Engineer Resume

Are you tired of sending out resumes and getting no responses? It might be time to reevaluate your resume game. From weak and passive verbs to unnecessary jargon and cliches, there are a lot of common mistakes that I see job seekers make on their resumes. But don’t worry, I’ve got you covered. In this post, I’ve provided tips on how to make your resume stand out, including using strong action verbs, quantifying your achievements, and avoiding common pitfalls. So if you want to increase your chances of landing that dream job, keep reading!

As a hiring manager, I can’t tell you how many times I’ve been stuck reading a never-ending, rambling resume that just goes on and on. It’s like the document is trying to test my endurance or something. Trust me, no one has time for that. Keep your resume concise and to the point. Two pages or less is the sweet spot. Not only will it make your life easier (less time spent writing and editing), it will also show off your editing skills. Plus, let’s be real, no one wants to be the resume equivalent of a drunk, rambling uncle at Thanksgiving dinner. Keep it short and sweet, just like a good joke. And if you’re struggling to condense your experiences, just remember: it’s not the length of the resume, it’s how you use it.

And for goodness sake, please don’t get too fancy with your resume design. Trust me, no one wants to read a resume that looks like a circus poster. Keep the fonts and formatting simple and professional, and save the graphics and images for your LinkedIn profile.

It’s generally a good idea to avoid weak and passive verbs in a resume because they can make your writing sound less assertive and confident. Weak verbs do not convey a strong sense of action, and they can make your writing sound vague or passive. For example, “assisted with” is a weak verb that does not convey a strong sense of what you did. It’s much better to use a stronger, more specific verb that clearly conveys your contribution.

To really sell the “action words”, use them in conjunction with quantifiable metrics to demonstrate the impact of your achievements. For example: “By implementing a new marketing strategy, I increased website traffic by 25% and generated $50,000 in new revenue for the company.”

Here is a sample of action verbs that are particularly effective at improving the quality of a resume:

“Achieved, managed, enhanced, increased, developed, expanded, improved, generated, reduced, transformed, streamlined, pioneered, innovated, and created.”

Here are some alternative phrases you can use instead of “duties included,” “responsible for,” “served as,” or “actions encompassed”:

“Handled, performed, managed, led, coordinated, oversaw, supervised, facilitated, and assisted.”

Using more specific and active language can help to make your resume more impactful and convey a strong sense of your capabilities and accomplishments.

Here are a few non-passive phrases that are particularly well-suited for technical positions:

  • Designed and implemented
  • Developed and maintained
  • Analyzed and resolved
  • Created and tested
  • Configured and optimized
  • Troubleshot and repaired
  • Upgraded and maintained
  • Created and delivered
  • Utilized and supported
  • Developed and deployed

These phrases convey a strong sense of action and responsibility, and they demonstrate your ability to take charge of technical tasks and projects. They can be especially effective in a resume for a technical position because they show that you have the skills and knowledge necessary to successfully complete complex projects.

Also, I want to get a sense of who you are and what makes you tick, so use a summary or objective statement to introduce yourself and your goals. This can be a few sentences at the top of your resume that give a brief overview of your background and what you hope to achieve in your career. I want to see passion. If I can’t get a sense of your personality and why you are in this profession, don’t expect a call.

In addition to professional experiences, a candidate’s hobbies, interests, and leadership experiences can also provide valuable insights into their motivations and values. For example, someone who has volunteered with a non-profit organization or taken on leadership roles in team projects may be driven by a desire to make a positive impact and contribute to the greater good. These experiences can also demonstrate the candidate’s ability to work with others and contribute to a common goal. Including this information in a resume can help to give me a more complete understanding of the candidate and their potential fit with the organization.

Similarly, it’s also a good idea to stay away from business jargon or cliches in a resume because they can make your writing sound insincere or overused. It’s important to use language that is clear and straightforward, and that accurately conveys your skills and experiences. Jargon and cliches can make your writing sound artificial and can be confusing to readers who are not familiar with them.

Nothing annoys me more than a resume littered with the latest generation of techno-babble:

“Our decentralized, blockchain-based web 3 platform leverages artificial intelligence and machine learning to enable frictionless, stateless data interoperability, resulting in a paradigm shift in the way we consume and monetize digital content.”

Garbage! A “buzzword bingo player” or a “buzzword spewer” is someone who is more interested in using flashy, technical-sounding language to impress others than in having a genuine understanding of what the words mean. This kind of behavior can come across as superficial or insincere, and it is off-putting to people who value genuine knowledge and understanding. If you DO fully understand the language, you run the risk of looking like a brilliant jerk instead, and it doesn’t matter, because I don’t want to work with either.

Last of all, and this goes for the interview as well (it’s hard for some and easy for others): demonstrate humility.

Top 10 Most Under-rated Rails Tricks Most People Don’t Use

Rails’ conventions become so instinctual as engineers develop that they often copy the same code and design patterns, not realising that there are a whole bunch of ways to do things that are either more performant, or cleaner and easier to maintain.

If you look a bit harder you can uncover a world of “better” solutions to do many commonly needed functions.

ActiveRecord’s #missing and #associated

Did you know you can easily query for models that have either zero, or at least one, of a specified association?

# (6.1) Gets all posts that have no comments and no tags. 
Post.where.missing(:comments, :tags)

# (7.0) Gets all posts that have at least 1 comment
Post.where.associated(:comments)

ActiveRecord greater and less than using infinity range

As long as you are using Rails 5.0+ on a recent Ruby (endless ranges need Ruby 2.6, beginless ranges need Ruby 2.7), you can use open-ended Range objects for less-than and greater-than conditions in an ActiveRecord relation.

# (5.0) Returns all users created in the last day.
User.where(created_at: 1.day.ago..)

# (5.0) Returns all users with less than 10 login attempts.
User.where(login_attempts: ..10)
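The open-ended conditions above rely on Ruby’s beginless and endless Range literals, which you can experiment with outside of ActiveRecord:

```ruby
# Beginless and endless Range literals are plain Ruby (endless needs
# Ruby 2.6, beginless needs 2.7); ActiveRecord simply translates them
# into <= / >= comparisons in SQL.
at_most_ten = (..10)  # anything less than or equal to 10
ten_or_more = (10..)  # anything greater than or equal to 10

puts at_most_ten.cover?(7)   # => true
puts ten_or_more.cover?(7)   # => false
```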

ActionPack Variant to dynamically render different layouts

This one blew my mind!

Sometimes you want to use a different view layout, for example regular users use one layout, and admins use another. Request variants do exactly that!

# (4.1) ActionPack variants
class DashboardController < ApplicationController
  def show
    request.variant = current_user.admin? ? :admin : :regular
  end
end

# If admin, uses: app/views/dashboards/show.html+admin.erb
# If not, uses: app/views/dashboards/show.html+regular.erb

Using #scoped and #none

Sometimes you need to return an ActiveRecord relation that represents either all the records of a model, or no records at all. This can be done with #scoped and #none (on Rails 4+, #scoped has been deprecated in favour of #all). Historically, ‘none’ was simulated by returning an empty array, but using an array causes problems: you cannot guarantee that the returned value (for example, in the sample below) responds to the same method signatures as the other code paths. Returning a relation in every case is just better object-oriented design.

def search(query)
  case query
  when :all
    scoped
  when :published
    where(published: true)
  when :unpublished
    where(published: false)
  else
    none
  end
end

Why is my query slow? Use #to_sql and #explain

Sometimes ActiveRecord relations do not act the way you expect them to, or you need to verify that the database queries are using the correct indices. Check that your hard-fought struggles with your ActiveRecord relations are generating the SQL (and database behaviour) you envision.

# Output the SQL the relation will generate.
Post.joins(:comments).to_sql

# Output the database explain text for the query.
Post.joins(:comments).explain

Filtering ActiveRecord results with merge

I really cannot believe that this isn’t covered (or at least, if it is, I haven’t seen it) in any of the default documentation, nor in any book or guide I’ve read. It is completely bewildering, since it’s an incredibly common usage pattern and hardly anyone knows about it. It lets you join onto another model and filter the results by one of that model’s named scopes.

class Post < ApplicationRecord
  # ...

  # Returns all the posts that have unread comments.
  def self.with_unread_comments
    joins(:comments).merge(Comment.unread)
  end
end

Multiple variable assignment using the splat * operator

One thing that everyone should know about is using the splat operator on objects other than arrays.

match, text, number = *"Something 981".match(/([A-Za-z]*) ([0-9]*)/)

# match  = "Something 981"
# text   = "Something"
# number = "981" (note: a String, not an Integer)

other examples include:

a, b, c = *('A'..'Z')

Job = Struct.new(:name, :occupation)
tom = Job.new("Tom", "Developer")
name, occupation = *tom

(thanks to a Stack Overflow community wiki for this one)
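The splat also works in the other direction, gathering leftover values during a destructuring assignment (a small extra illustration, not from the original examples):

```ruby
# A splat on the left-hand side gathers the remaining values.
first, *rest = [1, 2, 3, 4]
# first = 1, rest = [2, 3, 4]

# It can also soak up the front of the list.
*initials, surname = %w[John Ronald Reuel Tolkien]
# initials = ["John", "Ronald", "Reuel"], surname = "Tolkien"
```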

Asynchronous Querying

Rails 7.0 introduced #load_async, which loads ActiveRecord relations (queries) in background threads. This can dramatically improve performance when you need to load several unrelated queries in a controller.

class PostsController < ApplicationController
  def index
    @posts = Post.all.load_async
    @categories = Category.all.load_async
  end
end

If each of the queries above takes 200ms, a Rails 6.0 (or earlier) controller would take 400ms to execute them serially. With #load_async in Rails 7, the equivalent code only takes as long as the longest single query!

Stream Generated Files from Controller Actions

send_stream in Rails 7.0 lets you stream data to the client from the controller as the data is generated on the fly. Previously the data had to be buffered (or stored in a tempfile) and then transmitted with send_data. This will be awesome for SSE and long-polling options in Rails.

send_stream(filename: "subscribers.csv") do |stream|
  stream.write "email_address,updated_at\n"
 
  @subscribers.find_each do |subscriber|
    stream.write "#{subscriber.email_address},#{subscriber.updated_at}\n"
  end
end
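One caution about the hand-rolled row formatting above: it silently breaks as soon as a field contains a comma, quote, or newline. The stdlib CSV module handles the quoting for you (a small sketch, not from the original post):

```ruby
require 'csv'

# CSV.generate_line quotes fields containing commas, quotes or
# newlines, which naive string interpolation silently corrupts.
puts CSV.generate_line(['jane@example.com', 'Doe, Jane'])
# => jane@example.com,"Doe, Jane"
```

Inside the send_stream block you could write stream.write CSV.generate_line([subscriber.email_address, subscriber.updated_at]) instead of interpolating by hand.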

find_each is an oldie but a goodie that is massively under-used!

Finally, stop using #each to iterate over large numbers of records. When you use #each, Active Record runs the entire query, instantiates ALL the objects needed for the query, and populates their attributes from the result set. If the query returns a LOT of data, this process is slow and, more importantly, uses a tonne of memory. Instead, when you know there are going to be 100s or 1000s of results, use #find_each to load the objects in batches of (by default) 1000 records (you can change the batch size on each usage). Here is an example:

Book.where(published: true).find_each do |book|
  puts "Do something with #{book.title} here!"
end

My Leadership Vision

Good leadership is a critical component of any successful endeavour. I have been a leader for most of my career; however, only in the last 12 months have I come to appreciate what being a leader really is. A good leader is an expert at aligning their thoughts, behaviours, and morals with their authentic being. I believe it is this conscious, cognitive process of aligning your being with the needs of your followers that determines the efficacy of your leadership. I have identified four statements that I believe are the key to keeping oneself aligned with the values and behaviours of a truly Transformational and Servant Leader.

Vision is everything.

To lack vision is to invite chaos. Followers with a poorly defined vision cannot know where they should go. Inspirational vision underpins almost every aspect of good leadership. Without it, your followers are blind and cannot function effectively. No vision means no alignment, collaboration, strategy, or future. A leader must be inspirational and ensure that the followers have an articulated and purposeful vision supported by conversation, action, and positive role-modelling. Followers must have something to rally behind and believe in; failing to do so is a grave injustice.

Emotional Labour is everything else.

It is equally vital that a good leader role-models mindfulness and the management of emotional labour. Great leaders are aware of their own emotions and the emotions of their colleagues, and can temper them before they become destructive. Failure to regulate one’s emotions can lead to a breach of trust and may undermine the faith your followers have in you and your vision. Leaders help their followers do the same.

Delegating tasks creates followers but delegating authority creates leaders.

Leaders have a duty of care to their followers. Part of that duty is to create an environment where they get to exercise and practice their leadership traits and skills. An employee is more likely to emotionally invest in the leader’s vision if they feel as though they are empowered and trusted to excel. By allowing followers the freedom to become authorities in their work, you create an environment much more favourable to innovation and success. As followers become leaders themselves, supported and armed with a clear vision, they will inspire others and become powerful agents of positive change.

You can control, or you can lead, but you can’t do both.

You can leverage your positional power over a follower to get them to do something the way you want it done, but to do so is not leadership. Effective leadership means trusting your followers with the reasons for your decisions. If you cannot explain your rationale to your followers, they may lose faith in their purpose and in your competency to lead them. Trust that your followers want to do what is in the collective best interest, and do not do anything to undermine the trust they’ve put in you. Leadership is not demonstrated by wielding power, but by increasing the power of those who follow.

Quick and Simple HTTP Server Using Python

You can use Python as a quick way to host static content. On Windows, there are many options for running Python, I’ve personally used Cygwin and ActivePython.

To use Python as a simple HTTP server, just change your working directory to the folder with your static content and type python -m SimpleHTTPServer 8000 (Python 2); everything in the directory will be available at http://localhost:8000/

Edit: To do this with Python 3 (3.4.1 and likely other versions of Python 3), use the http.server module:

python -m http.server 8000

or possibly:

python3 -m http.server 8000

on Windows:

py -m http.server 8000

External API Authentication in Rails using Devise and JWT Tokens

Scenario

We want to create an API for our Rails application which requires a user to first authenticate with their username and password to verify their identity, but subsequently, we wish to identify the user using a JWT token.

So what is a JWT anyway?

The advantage of using JWTs (or JSON Web Tokens) is that they are an industry-standard (RFC 7519) method for representing claims securely between two parties. They are trustworthy because they are digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA. Once the user is logged in, each subsequent request will include the JWT, allowing the user to access routes, services, and resources that are permitted with that token.

A typical JSON Web Token looks something like xxxxx.yyyyy.zzzzz: three Base64url-encoded segments separated by dots (see the jwt.io website for an interactive example).

Our solution will focus on an HMAC implementation. What’s more, the secret (which is used to sign the token) will change each and every time a user authenticates with the server. In this way, if the JWT does get into the wrong hands (you are using SSL, aren’t you?), simply re-authenticating will invalidate the previous secret and the stolen JWT becomes useless. However, you should still safeguard the token as much as possible and not keep it longer than required.

After the secret is generated, a token is created and returned to the user; the client can send it back whenever accessing a protected resource or route, typically in the Authorization header using the Bearer schema: Authorization: Bearer <token>
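To make the token format concrete, here is a minimal stdlib-only sketch of how an HS256 token is assembled: two Base64url-encoded JSON segments (header and payload) plus an HMAC-SHA256 signature over them. For real applications use the jwt gem, as the tutorial code does; this only illustrates the structure.

```ruby
require 'json'
require 'base64'
require 'openssl'

# Build a JWT by hand: Base64url(header).Base64url(payload).signature
def hs256_jwt(payload, secret)
  header = { alg: 'HS256', typ: 'JWT' }
  segments = [header, payload].map do |part|
    Base64.urlsafe_encode64(JSON.generate(part), padding: false)
  end
  signing_input = segments.join('.')
  signature = OpenSSL::HMAC.digest('SHA256', secret, signing_input)
  "#{signing_input}.#{Base64.urlsafe_encode64(signature, padding: false)}"
end

token = hs256_jwt({ user: 'jane@example.com' }, 'per-user-shared-secret')
puts token  # three dot-separated segments: xxxxx.yyyyy.zzzzz
```

Note how changing the secret changes the signature, which is exactly why regenerating the per-user secret on each sign-in invalidates older tokens.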

Because they’re small, portable, and reliable, JWTs are becoming extremely popular for web-based API authentication and are rapidly becoming the industry standard.

But what happens if the JWT is intercepted and stolen?

The short answer is that it’s really bad. What makes it less bad than a compromised username and password is that the token can be immediately invalidated without requiring anything of (or impacting) the user directly. Also, the token itself is only useful to an attacker for a limited time: once the token expires, it becomes useless.

However, it should be noted that there are circumstances where a stolen JWT can actually be worse. This largely depends on how the token was obtained in the first place. If an attacker has successfully executed a man-in-the-middle attack, the hacker may be able to simply obtain a new token whenever required.

Basically, as always, there is no silver bullet when it comes to security, and you should follow best practices and take your own server security into account. While this approach has worked safely for some time in my applications, the code here has been modified and generalized to make the tutorial easier to understand; it should not be considered a complete and secure implementation for a production environment.

Enough with the disclaimer; get to the solution!

This is actually really easy to set up in Rails with Devise. There are 2 main components. The first is a special API session controller to handle the initial authentication. Since this will not go through the standard Rails form and Devise controller, we need to make a controller to handle it. I recommend creating a specialized session controller so that the API authentication is structurally separate from the rest of your application and can be isolated for security and testing purposes.

module Api
  class SessionsController < Devise::SessionsController
    skip_before_action :verify_authenticity_token

    def create
      self.resource = warden.authenticate!(auth_options)
      sign_in(resource_name, resource)
      self.resource.update_attributes(session_attributes)
      respond_to do |format|
        format.json { render json: { token: generate_token(self.resource) } }
      end
    end

    def destroy
      current_user.update_attributes(shared_secret: nil, token_expires: nil)
      super
    end

    private

    def generate_token(resource)
      JWT.encode(token_payload(resource), resource.shared_secret, 'HS256')
    end

    def token_payload(resource)
      # The exp claim must be a numeric timestamp.
      { user: resource.email, exp: 1.week.from_now.to_i }
    end

    def session_attributes
      {
        shared_secret: create_secret,
        token_expires: 1.week.from_now
      }
    end

    def create_secret
      SecureRandom.alphanumeric(127)
    end
  end
end

It should be pretty self-explanatory. It accepts an email and password and performs the same actions as a regular Devise controller: it signs the user in, then updates the user record with a randomly generated secret and sets an expiry for that secret.

You also need to configure a route so you can post the email and password to this controller. It’s a little more complicated than usual, but not overwhelmingly so:

namespace :api do
  devise_for :users, skip: :all
  devise_scope :user do
    post 'users', to: 'sessions#create', as: nil
  end
end

This technique also lets you determine how long it has been since the user ACTUALLY authenticated. You could write a rake task to automatically invalidate all secrets older than a specified duration. Our example will not go into the expiry of the token, but it should be easy for any experienced Rails dev.

Next, the actual API controller class. This is the meat and potatoes. All your API controllers should inherit from this controller. It bypasses the usual Devise authentication process and instead looks at the request headers for two values: an API key that has been uniquely assigned to each user, paired with a valid authentication token. This helps strengthen the JWT so that it is not solely responsible for the authentication (both the JWT and the API key must be compromised).

After the API key and the JWT authentication token have been verified, the system will allow the child controller action to continue. Notice that current_user and user_signed_in? will be available as normal.

module Api
  class ApiController < ApplicationController
    skip_before_action :verify_authenticity_token
    skip_before_action :authenticate_user!
    before_action :authenticate_api_key!
    before_action :authenticate_user_from_token!
    protect_from_forgery with: :null_session

    protected

    def current_user
      @resource
    end

    def user_signed_in?
      !@resource.nil?
    end

    private

    def authenticate_user_from_token!
      @resource ||= user_with_key(apikey_from_request).where(email: claims[0]['user']).first
      if @resource.nil?
        raise Pundit::NotAuthorizedError.new('Unable to deserialize JWT token.')
      end
    rescue StandardError => e
      Rails.logger.error e
      raise Pundit::NotAuthorizedError.new(e)
    end

    def authenticate_api_key!
      if apikey_from_request.present?
        unless user_with_key(apikey_from_request).present?
          raise Pundit::NotAuthorizedError.new('Unable to verify the API key.')
        end
      end
    end

    def claims(token = token_from_request, key: shared_key)
      JWT.decode(token, key, true)
    rescue JWT::DecodeError => e
      raise Pundit::NotAuthorizedError.new(e)
    end

    def jwt_token(user, key: shared_key)
      expires = (DateTime.now + 1.day).to_i
      JWT.encode({ user: user.email, exp: expires }, key, 'HS256')
    end

    def token_from_request
      # Accepts the token either from the header or a query var.
      # Header authorization must be in the following format:
      #   Authorization: Bearer {yourtokenhere}
      auth_header = request.headers['Authorization']
      token = auth_header.try(:split, ' ').try(:last)
      unless Rails.env.production?
        token = request.parameters.try(:[], 'token') if token.to_s.empty?
      end
      token
    end

    def apikey_from_request
      # Accepts the ApiKey either from the header or a query var.
      # Header ApiKey must be in the following format:
      #   ApiKey: {yourkeyhere}
      key = request.headers.try(:[], 'ApiKey').try(:split, ' ').try(:last)
      if !Rails.env.production? && key.blank?
        key = request.parameters.try(:[], 'apikey')
      end
      key
    end

    def shared_key
      user_secret.tap do |key|
        raise Pundit::NotAuthorizedError.new('Unable to verify the secret.') if key.blank?
      end
    end

    def user_secret
      return if apikey_from_request.nil?
      user_with_key(apikey_from_request).first.try(:shared_secret)
    end

    def user_with_key(key)
      return if apikey_from_request.nil?
      User.where(private_key: key).where('private_key_expires > ?', Time.zone.now)
    end
  end
end

There you go. You now have the basis of a pretty good API Authentication Layer for your Rails app!

A few points of note:

  • This code will allow you (for ease of testing) to supply both the JWT and the API Key as query string parameters when NOT in production mode. However, in production, the request MUST use the correct headers.
  • This code assumes the use of the most excellent Pundit gem and raises a Pundit::NotAuthorizedError if the authentication fails. If you use something else, like CanCan, you will need to raise an error appropriate to your application.
  • You may want to expose the current_user and user_signed_in? as controller helpers, if you need to access them in your views, but in order to keep this somewhat minimalistic, I have omitted this.
  • If you want more information about the security implications of using JWTs for Authentication and how to mitigate security risks, this is an excellent resource.
  • If you need help learning more about JWTs, or if you would like an online tool to help generate valid JWTs for testing, JWT.io is a very neat website.

Automatic Rails Model Notification Concerns

This ActiveRecord concern is a module I am particularly proud of. It was developed for an application that needed an advanced notification system across a whole slew of database changes. Rather than wire up a basic notification job to each controller action that triggered each model change, I elected to write a model concern that automatically triggers the notification system on different ActiveRecord changes.

The concern worked amazingly well, and helped not only to keep our controllers very light, but also meant that database changes could not escape notifying users of the change.


# This concern allows you to hook ActiveRecord model changes
# directly into a system-wide notification system using ActiveSupport
# Callbacks. Jobs can be created to reflect the exact work you want done
# when a specific event occurs in the lifecycle of the model you want
# to be notified on.
#
# ==== Example
#   class MyModel < ApplicationRecord
#     include Notifyable
#
#     notify :on_create, :handler_job
#
module Notifyable
  # Make this module a concern and include the ActiveSupport callbacks module
  extend ActiveSupport::Concern
  include ActiveSupport::Callbacks

  included do
    # Add a has_many model association for the notifications events.
    has_many :notification_events, as: :item

    # This code opens up the parent class and generates several methods
    # directly into it providing the core foundation of the
    # Notifyable concern. It declares the available callbacks
    # and runs the associated callback when the custom-defined
    # callback is triggered.
    %w(on_initialize on_update on_save on_create on_commit on_destroy on_find).each {|name|
      module_eval <<-RUBY, __FILE__, __LINE__ + 1
        define_callbacks :#{name}
        #{name.gsub('on', 'after')} :notify_#{name}
        
        def notify_#{name.to_s}
          run_callbacks :#{name} do
            @invocation = :#{name}
          end
        end
      RUBY
    }

    # Opens up the calling class so methods can be redefined on
    # the current object. We need to add the +notify+ method so
    # that we can define what callbacks should be watched to
    # trigger notifications.
    #--
    # FIXME: Class variables for handlers are bugged if
    # FIXME: different models use different handlers. I
    # FIXME: would love to refactor this so that you can 
    # FIXME: provide a &block instead of just a handler
    # FIXME: name/symbol.
    #++
    # Callbacks are always appended *after* the source event
    # declared; so that +:on_save+ will actually declare itself
    # as a +:after_save+ on the parent ActiveRecord class.
    class << self
      def notify(name, *handlers)
        @@handlers ||= HashWithIndifferentAccess.new
        @@handlers.store(name, handlers)
        set_callback(name, :after, :handler_callback)
      end
    end

  end


  protected

  # Execute the callback for each handler invocation. It
  # is expected that there will be a corresponding ActiveJob
  # to handle the notification within a +Notification+
  # namespace with the same name as the invocation class,
  # followed by the handler name.
  #
  # Example:
  #   +Notification::MyClassHandlerJob+
  def handler_callback
    @@handlers[@invocation].each do |handler|
      eval "Notification::#{self.class.name}#{handler.to_s.classify}.perform_later(self, '#{@invocation}')"
    end
  end

end

As directed in the comments of the concern, the only things needed to make this work are a method call in your model telling the module when the handler should be notified of a change, and which trigger it should attach to. For example, let’s assume you have a TodoItem model:

class TodoItem < ApplicationRecord
  # Include the Notifyable concern
  include Notifyable

  # Instruct Notifyable on which callback and handler should be used.
  notify :on_create, :handler_job
end

Lastly, as you can see from the concern, you now need to create an ActiveJob class called Notification::TodoItemHandlerJob, which will be enqueued whenever a TodoItem database record is created. This job can do whatever you need in order to notify the relevant stakeholders of a new TodoItem record.

What’s more, this will be done asynchronously from the main thread of your application, which should make your application more performant.

There are a few improvements I would eventually like to make to this:

  1. I’d like to package it as a gem and monkey-patch it into the abstract ApplicationRecord class, so that the concern is automatically included in all models and the include statement is not required.
  2. I’d like to be able to pass a &block to #notify instead of the handler job symbol, because then you could eliminate that disgusting eval in the protected #handler_callback.
  3. There is a bug in this concern regarding class variables. Because the handlers are stored in a class variable at the module level, and not on the ‘consuming’ class using the concern, the handlers are overwritten between models (which means that every model must use the same handler symbol). Fortunately, in the project that uses this, having all the notification handler jobs share the suffix ‘HandlerJob’ was deemed preferable anyway, so it was not seen as a big problem. Alas, I’d very much like to fix it one day.
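A rough plain-Ruby sketch of improvements 2 and 3 together: handlers stored per including class (avoiding the shared @@handlers problem) and invoked as blocks, with no eval. The names here are hypothetical, and this is deliberately not wired into the ActiveSupport callback machinery.

```ruby
module BlockNotifyable
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    # Register a block handler for an event.
    def notify(event, &block)
      (notify_handlers[event] ||= []) << block
    end

    # A plain instance variable on the including class, so every
    # model keeps its own registry (no shared class variable).
    def notify_handlers
      @notify_handlers ||= {}
    end
  end

  # In the real concern this would be fired from an after_* callback.
  def run_notifications(event)
    self.class.notify_handlers.fetch(event, []).each { |h| h.call(self, event) }
  end
end

class TodoItem
  include BlockNotifyable
  notify(:on_create) { |record, event| puts "#{event} fired for #{record.class}" }
end

TodoItem.new.run_notifications(:on_create)
# prints: on_create fired for TodoItem
```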

Social Network Aggregate API Factory Design Pattern in Ruby

I worked on a project which allowed users to authenticate via OAuth with several well-known social media platforms. After users had linked all their social media presences, we wanted to import each user’s posts from each platform. This is a (redacted) sample of how I accomplished it:

First, I have a neat little module that allows me to encapsulate a list of handler objects, plus a notification method that can trigger the correct handler based on how that handler has subscribed itself in the factory. This is a little confusing, but basically each provider class will register itself with the factory, subscribing to the particular type of social network it is able to process (handle). I abstracted this code because I thought it might be very handy for other projects which follow a similar pattern:

module EventDispatcher
  def setup_listeners
    @event_dispatcher_listeners = {}
  end

  def subscribe(event, &callback)
    (@event_dispatcher_listeners[event] ||= []) << callback
  end

  protected

  def notify(event, *args)
    if @event_dispatcher_listeners[event]
      @event_dispatcher_listeners[event].each do |m|
        m.call(*args) if m.respond_to? :call
      end
    end
    nil
  end
end
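To make the pattern concrete, here is a small self-contained usage sketch. The module is repeated so the example runs on its own, and the ImportQueue class and network names are hypothetical stand-ins for the factory and providers described below:

```ruby
module EventDispatcher
  def setup_listeners
    @event_dispatcher_listeners = {}
  end

  def subscribe(event, &callback)
    (@event_dispatcher_listeners[event] ||= []) << callback
  end

  protected

  def notify(event, *args)
    (@event_dispatcher_listeners[event] || []).each do |callback|
      callback.call(*args)
    end
    nil
  end
end

# Hypothetical consumer: providers subscribe to the network they handle.
class ImportQueue
  include EventDispatcher

  def initialize
    setup_listeners
  end

  def process(network, user)
    notify(network, user)
  end
end

queue = ImportQueue.new
queue.subscribe(:twitter) { |user| puts "importing tweets for #{user}" }
queue.process(:twitter, 'jane')
# prints: importing tweets for jane
```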

Next, we need to develop the factory object to encapsulate our handler objects. It contains all the configuration attributes of our social network platforms (API keys, secrets, etc.). It is instantiated with a hash, and a static method #load reads a specified file (by default /config/social_network_configuration.json) and returns an instance of the factory with the contents of the configuration file passed into the constructor:

require File.expand_path('../../event_dispatcher', __FILE__)

module SocialNetworking
  class SocialNetworkFactory
    include EventDispatcher
    attr_reader :configs

    def initialize(data)
      setup_listeners
      @configs = {}
      data.each { |n, o| @configs.store n.downcase.to_param.to_sym, o }
    end

    def process(network, user)
      notify(network, user)
    end

    ##
    # Reads client configuration from a file and returns an instance of the factory
    #
    # @param [String] filename
    #   Path to file to load
    #
    # @return [SocialNetworking::SocialNetworkFactory]
    #   Social network factory with API configuration
    def self.load(filename = nil)
      if filename && File.directory?(filename)
        search_path = File.expand_path(filename)
        filename = nil
      end
      while filename.nil?
        search_path ||= File.expand_path("#{Rails.root}/config")
        if File.exist?(File.join(search_path, 'social_network_configuration.json'))
          filename = File.join(search_path, 'social_network_configuration.json')
        elsif search_path == '/' || search_path =~ /[a-zA-Z]:[\/\\]/
          raise ArgumentError,
                'No ../config/social_network_configuration.json filename supplied ' +
                'and/or could not be found in search path.'
        else
          search_path = File.expand_path(File.join(search_path, '..'))
        end
      end
      data = File.open(filename, 'r') { |file| MultiJson.load(file.read) }
      self.new(data)
    end
  end
end

The configuration file (/config/social_network_configuration.json) looks something like:

{
  "Facebook": {
    "oauth_access_token": "...",
    "expires": ""
  },
  "SoundCloud": {
    "client_id": "..."
  },
  "Twitter": {
    "access_token": "...",
    "access_token_secret": "...",
    "consumer_key": "...",
    "consumer_secret": "..."
  },
  "YouKu": {
    "client_id": "..."
  },
  "YouTube": {
    "dev_key": "..."
  },
  "Weibo": {
    "app_id": "..."
  }
}

The last part is to create a different handler object for each social network (as each social network has its own specific API for interfacing with the platform). It’s pretty basic:

module SocialNetworking
  module Providers
    class NetworkNameProvider
      def initialize(factory)
        # you can access the configurations through the factory
        @app_id = factory.configs[:network_name]['app_id']

        # instruct the factory that this provider handles the
        # 'network_name' social network oauth. The factory will
        # publish the user's authorization object to this handler.
        factory.subscribe(:network_name) do |auth|
          # Do stuff ...
        end
      end
    end
  end
end

So an example of a Weibo Provider class might look something like this:

require File.expand_path('../../../../lib/net_utilities', __FILE__)
require 'base62'
require 'httpi'

module SocialNetworking
  module Providers
    class WeiboProvider
      include NetUtilities

      def initialize(factory)
        @token = get_token
        @app_id = factory.configs[:weibo]['app_id']

        factory.subscribe(:weibo) do |auth|
          Rails.logger.info " Checking Weibo user '#{auth.api_id}'"

          begin
            @token = auth.token unless auth.token.nil? # || auth.token_expires < DateTime.now
            request = HTTPI::Request.new 'https://api.weibo.com/2/statuses/user_timeline.json'
            request.query = { source: @app_id, access_token: @token, screen_name: auth.api_id }
            response = HTTPI.get request
            if response.code == 200
              result = MultiJson.load(response.body)
              weibos = result['statuses']

              weibos.each { |post|
                # ... do something with the post
              }
            end
            auth.checked_at = DateTime.now
            auth.save!
          rescue Exception => e
            Rails.logger.warn " Exception caught: #{e.message}"
            @token = get_token
          end
        end
      end

      private

      def get_token
        auth = Authorization.where(provider: 'weibo').where('token_expires < ?', DateTime.now).shuffle.first
        auth = Authorization.where(provider: 'weibo').order(:token_expires).reverse_order.first if auth.nil?
        raise 'Cannot locate viable Weibo authorization token' if auth.nil?
        auth.token
      end

      def uri_hash(id)
        id.to_s[0..-15].to_i.base62_encode.swapcase +
          id.to_s[-14..-8].to_i.base62_encode.swapcase +
          id.to_s[-7..-1].to_i.base62_encode.swapcase
      end
    end
  end
end

Of course there are a lot of opportunities to refactor and make the providers better. For example, a serious argument could be made that the API handshake should be abstracted into a separate class to be consumed by the provider, rather than the provider doing all the API lifting itself (which violates the single-responsibility principle) – but I include it inline to give a better idea of how this factory works without getting too abstracted.

The last piece of this puzzle is putting it all together. There are a lot of different ways you could consume this factory; but in this example I am going to do it as a rake task that can be scheduled regularly via cron.

Dir["#{File.dirname(__FILE__)}/../social_networking/**/*.rb"].each { |f| load(f) }

namespace :social_media do
  desc 'Perform a complete import of social media posts of all users'
  task import: :environment do
    factory = SocialNetworking::SocialNetworkFactory.load
    # Instantiate each of your providers here with the factory object.
    SocialNetworking::Providers::NetworkNameProvider.new factory
    SocialNetworking::Providers::WeiboProvider.new factory

    # Execute the Oauth authorizations in a random order.
    Authorization.where(muted: false).shuffle.each do |auth|
      factory.process(auth.provider.to_sym, auth)
    end
  end
end

I wouldn’t do this in production though, as you may encounter problems if the task is triggered while the previous run is still executing. Additionally, I would recommend leveraging ActiveJob to run each handler, which would give massive benefits in execution concurrency and feedback on job successes/failures.

Also, you could get really clever and loop over each file in the /providers directory, including and instantiating them all at once, but I have chosen to declare them explicitly in this example.
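That auto-instantiation idea can be sketched in a few lines of plain Ruby. This is a hedged illustration only — the Providers module and its two classes below are stand-ins (in the real project the files would first be loaded from the /providers directory), and the array standing in for the factory exists just so the sketch runs on its own:

```ruby
# Stand-in providers, purely for illustration; real providers would be
# loaded from the /providers directory before this loop runs.
module Providers
  class AProvider
    def initialize(factory); factory << self.class.name; end
  end

  class BProvider
    def initialize(factory); factory << self.class.name; end
  end
end

registered = []  # stand-in for the factory object

# Find every class defined under Providers and hand each one the factory,
# so it can subscribe itself – no explicit per-provider declaration needed.
Providers.constants.sort.each do |const|
  klass = Providers.const_get(const)
  klass.new(registered) if klass.is_a?(Class)
end

registered  # => ["Providers::AProvider", "Providers::BProvider"]
```

The trade-off is discoverability: the explicit declarations in the rake task above make it obvious (and greppable) which providers are live, which is exactly why commenting one out is such a painless kill switch.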

As you can see, this is a nice little pattern which uses some pseudo event subscription and processing to let you import from multiple APIs while maintaining separation of responsibilities. As we loop over each authorization record, this pattern automatically hands the auth record to the correct handler. You can also chop and change which providers are registered, as any authorization record that doesn’t have a registered handler for its type will simply be ignored. This means that if the Weibo API changes and we need to fix our handler, it is trivial to remove the handler from production by commenting it out, and all our other handlers will continue to function like nothing ever happened.

This code was written many years ago and should work on Ruby versions even as old as 1.8. There are probably many opportunities to refactor and enhance this code substantially using a more recent Ruby SDK. Possible enhancements would include allowing the providers to subscribe to the factory using a &block instead of a symbol, and allowing the factory to pass a block into the #process method to give access for additional processing to be executed in the context of the provider, but abstracted from it.

Nevertheless, I hope that this pattern proves useful to anyone needing a design pattern to have a handler automatically selected to process some work without complicated selection logic.

Skipping an ActiveRecord Callback Programmatically

I am a massive fan of ActiveSupport callbacks and use them frequently. They allow me to chain behaviours together, essentially using data storage as an event-based system to enforce business logic. An example of this is using after_create_commit callbacks to automatically trigger an email that needs to be sent (such as a welcome email), to automatically generate some accounting record, or to send an admin notification email.

This does have some drawbacks, however. It means that you really need to have a good grasp of the domain logic, and it becomes critically important to choose the correct way to update the record (update_column vs update_attribute), lest you fail to trigger important business logic, or trigger it when you shouldn’t. But generally, when used appropriately, I find them invaluable. Sometimes, though, you might find yourself in a situation where you need to run some code (such as in a rake task) where you cannot influence the method used to update the database, but the callbacks must not be executed.

As I said, if you have direct control over the ActiveRecord object, then it’s easy:

@object.update_column(:the_attribute, 'value')
@object.update_columns(attributes)

These will update the database, but skip validations and callbacks.

But if the updates are being triggered by another class or by code outside of your scope or control, this won’t work. Perhaps you are calling a method on a related class, and that method specifies .update_attribute and you cannot change it. What then?

Fortunately, there is a solution.

Let’s say you have a class definition:

class User < ActiveRecord::Base
  after_save :my_method
end

There are 2 ways you can save the object without triggering the callback.

Method #1:

User.send(:create_without_callbacks)
User.send(:update_without_callbacks)

But that’s super gross! Using .send is a code smell if ever there was one.

Method #2:

User.skip_callback(:save, :after, :my_method)
User.create!

This is much more civilised. What’s more, after you’ve run your rspec mock, rake task, or whatever, you can reset the callback with User.set_callback(:save, :after, :my_method).
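Because the skip is class-level state, it is worth wrapping the skip/reset pair so the callback is always restored even if your code raises. This is a hedged sketch: the helper name without_callback is my own invention, skip_callback and set_callback are the real Rails APIs, and FakeModel is a stand-in so the sketch runs without Rails (a real app would pass an ActiveRecord class such as User):

```ruby
# Wrap skip_callback/set_callback so the callback is always restored,
# even if the block raises.
def without_callback(klass, kind, filter, method_name)
  klass.skip_callback(kind, filter, method_name)
  yield
ensure
  klass.set_callback(kind, filter, method_name)
end

# Stand-in model, purely for demonstration; it just tracks which
# callbacks are currently registered.
class FakeModel
  @callbacks = [[:save, :after, :my_method]]

  class << self
    attr_reader :callbacks

    def skip_callback(kind, filter, name)
      @callbacks.delete([kind, filter, name])
    end

    def set_callback(kind, filter, name)
      @callbacks << [kind, filter, name]
    end
  end
end

without_callback(FakeModel, :save, :after, :my_method) do
  # FakeModel.callbacks is empty in here – saves would skip :my_method
end
FakeModel.callbacks  # => [[:save, :after, :my_method]] – restored afterwards
```

The same idiom fits neatly into an rspec around hook if you need a callback silenced for a whole group of examples.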

Old School Cache Invalidation in a world of Rails 5.2 Recyclable Cache Keys

One of the hallmark features of Rails 5.2 was the introduction of recyclable cache keys.

Once the new cache API was integrated into Basecamp, DHH had this to say about it:

We went from only being able to keep 18 hours of caching to, I believe, 3 weeks. It was the single biggest performance boost that Basecamp 3 has ever seen.

Put simply, the previous cache key generation (unless you overrode it) used the updated_at timestamp to differentiate the specific version of the object. So, for example, [class]/[id]-[timestamp], like users/5-20110218104500 or thing/15-20110218104500, which is what Active Record in Rails used to do by default when you call #cache_key.

With Rails 5.2 this no longer works as you expect. Instead, it’s simply [class]/[id] – the timestamp is dropped.

The basic idea behind it is that the cache object can be updated directly, and thus cache keys can be reused, dramatically lowering the quantity of cache garbage and increasing the number of useful objects stored in the cache; this should deliver a significant performance boost by increasing your cache hit rate.

The only drawback is that the cache needs to be updated using the new cache API – if you don’t do this, you will suddenly see that your cached objects no longer ‘automatically’ expire.

Whilst I do recommend upgrading your codebase to use the new cache API, you can disable this behaviour and return to the older #cache_key style (using the updated_at timestamp) by simply adding this to the relevant file in config/environments:

config.active_record.cache_versioning = false

This should get you working the old way – I will write a post on how to maximise use of the new Rails cache API in the future.