Editors and iframes

Before building Quill, I surveyed the landscape of rich text editors. All of the powerful and widely used ones operated within an iframe, but the reason for this was something of a mystery. It's hard to go against the grain when literally all the popular editors were aligned on iframes, yet at the same time most of them made this configurable. What I hoped to find was a differentiator that necessitated the use of iframes.

I originally decided to go with iframes for Quill as well. But after experiencing the consequences of this choice firsthand and speaking directly with other editor authors, I now believe isolating Quill in an iframe is a mistake. This post is what I wish had been available when I started, and a summary of all the knowledge I have gathered on the topic.


The most discussed benefit of iframes is separation from other styles and scripts on the web page. The benefit is bidirectional, but the challenging direction is external scripts and styles affecting the editor. For one, they are an unknown quantity. And when conflicts do arise, users largely blame the editor. One early user of Quill had a script that literally monitored for and removed every <br> tag on the page [1].

The problem with this benefit is that it only reduces the likelihood of conflict. If users want to mess with the editor, they still can, and iframes will offer no protection [2]. So we can only consider unintentional interference. This is a diminishing benefit as front-end debugging tools become more powerful and library authors more responsible and conscious of their footprint (probably from natural selection). To my knowledge there is no widely used library that interferes with the operation of Quill [3].

There are tangible drawbacks to the "protection" afforded by iframes, however. Browser-specific behaviors that are expected and desired will be prevented. Here are a few examples:

  • The editor might be used as a rich form field, but tabbing into it will not work without custom logic.
  • The editor cannot autogrow with its content, since iframes do not autogrow [4]. This limitation also lacks a good workaround [5].
  • Analytics libraries are commonplace today, and an iframe is essentially an event black hole.
  • The back button controls the history of the page, not the iframe, so after clicking a hash link inside an iframed editor that points to another section of the editor, hitting the back button will not work as expected.

The cursor position is also saved per document by default. This is useful when, for example, the user clicks a bold button outside the editor: the browser will restore the cursor to the right position when the editor regains focus. This is not the case in IE, however, so saving and restoring the cursor needs to be handled by a cross-browser editor anyway.


Another benefit of an editor being isolated in an iframe is that one can choose and control all aspects of the document without fear of adversely affecting other parts of the web page. Practically, this means controlling the doctype, the contents of <head>, and a feature called designMode.

Doctypes have their own fun-filled history, but today there is little fragmentation and misuse. Quirks mode has been shamed out of existence, and an astounding 92.29 percent of the top 10,000 sites [6] use the standard declaration. Even the next several percent of popular declarations have no adverse effects on Quill [7]. Quirks mode will most likely break an editor, but its usage is now more of a user problem than a technological one.

The only legitimate reason I can think of for an editor to modify the <head> is to add <style> tags [8]. This should not adversely affect the rest of the page if properly namespaced. Some bookkeeping needs to be done to support multiple instances of an editor, but given caching, duplicate <style> tags have no noticeable effect other than sloppiness.

Many rich text editors, including Quill, depend on contenteditable, the browser technology that makes the DOM directly editable. Born from the same loving womb of IE 5.5 was designMode, which is essentially contenteditable except it applies to an entire document as opposed to a single DOM element [9]. There were periods when a browser supported one but not the other (most notably, Firefox 2 supported designMode but not contenteditable), and even when both were available, one implementation was often less buggy than the other.

Moving Forward

For the next version of Quill, iframes will be removed altogether, and the benefits are already being felt even before release. A much simplified codebase, no longer needing to manage an iframe and track multiple window and document objects, has already enabled a 4k savings of minified code.

Only the original authors of the respective editors can answer definitively why they chose to utilize iframes, but I believe the reason is largely historical. The prevalence of quirks mode and the lack of availability and stability of contenteditable left little choice but to use an iframe from 2000 until perhaps as late as 2010. But today this is no longer the case, and there remains little reason to continue this artifact of history.

  1. This is actually what pushed Quill over the fence. Prior to this, using iframes was configurable.
  2. Note the Same Origin Policy does not offer any protection here.
  3. Besides Bootstrap changing the default box-sizing, but style interferences can always be corrected with a more specific rule (and Quill happens to also prefer border-box).
  4. Except that currently in mobile Safari, content will flow out of the iframe, which is a security bug (maybe someone will demonstrate a phishing attack to get the Safari team to fix this).
  5. You will need to listen for content changes, which means either mutation observers or mutation events. The latter is deprecated, but the former is asynchronous, so there will be a delay between the content changing and the iframe being resized. Hacks like listening for keyboard events provide an incomplete solution since the content might change in other ways (paste, drag and drop, etc).
  6. Top 10,000 according to Alexa.
  7. They shouldn’t from a standards perspective, but hey browser vendors sometimes do crazy things.
  8. And even this should be discouraged given development and adoption of Content Security Policy.
  9. Not clear why Microsoft would invent a technology and a seemingly superset technology at the same time.
Thanks to Byron Milligan and Steven Wu for reviewing this post and David Greenspan and Mime Čuvalo for answering my iframe questions.

Binary ANSI Art in the Terminal

Check Ansize out on Github: https://github.com/jhchen/ansize

A big problem in managing servers is that from the terminal they all look the same. You don't want to start poking around on your production server when you meant to log into your staging server. At Stypi, we had a very clear way to make the distinction:



We are using the Message of the Day (motd) feature, which will print the contents of /etc/motd every time you log in.

To automate the process, I built Ansize: a quick tool to help you convert images to binary ANSI art. Just give it the image you want to convert, the output file, and optionally the width in number of characters. Then, move the file onto your server to replace the /etc/motd file:

git clone git@github.com:jhchen/ansize.git
cd ansize
go build ansize.go
./ansize image.png image.ansi
scp image.ansi user@yourserver.com:/etc/motd

Now every time you log into the server, you’ll be greeted with a colorful reminder!

Credit to Patrik Roos from text-image.com for the online image conversion that was the inspiration. Even though there’s a hyphen in the domain name, it was the best converter I found.


Mechanics of a Small Acquisition

When a startup successfully exits, chances are it was an acquisition. Unfortunately for the founders, that acquisition was likely their first, while the acquirer has probably gone through many. This was the case with Stypi. Fortunately, my cofounder and I were lucky enough to have had access to several other acquired founders who helped us ultimately navigate our first multimillion-dollar exit for a company barely a year old. Hopefully by sharing what we learned and encountered, you can be slightly less lost, should you be faced with an acquisition of your own.

DISCLAIMER: Every startup and founder group is unique so the data provided here may not apply in all cases. In particular, it is skewed towards < $25m acquisitions made by much larger companies. An acquisition is important enough that you should always do your own research. Reach out to acquired founders in your space, company size range, and/or acquired by the same company that may acquire your company. If any of these apply to me feel free to contact me at jason [at] stypi [dot] com.

First Contact

Potential acquirers are a lot like the opposite sex: their intentions are confusing but the prospects are exciting. It will likely be a founder or the Corporate Development department of a larger company, acting on behalf of an interested internal team, that first approaches you. Either way, get comfortable with this person, as he/she will be setting up and facilitating your meetings, and will likely be the one you eventually haggle with over terms.

Before moving forward, it is worth noting that the cost of pursuing an acquisition is nonzero—in fact, it’s quite high. You and your cofounder(s) will be consumed by the process for weeks and divulge a lot of otherwise private information to potential competitors. But, more importantly, rejection hurts. Believing that the finish line is just yards away and finding that it is actually miles is not going to be good for your company’s morale.


Assuming the decision is to open your company’s kimono and pursue an acquisition, you and your cofounder(s) will end up going on a series of meetings with the potential acquiring company (except at the pace of trying to decide if you want to get married by the end of the week). For larger companies, Mutual NDAs will likely be signed by the first meeting. Decisions to continue the relationship are made very quickly after each meeting. This of course goes both ways—we decided to end the relationship early with more than half of interested potential acquirers.

If feelings are positive on both sides, arrangements are made to meet again, probably with different people. Regardless of whom we were meeting with, they were usually available the next day or the day after, which seems to suggest that the priority of acquisition meetings is reasonably high. One founder, we discovered, cancelled racing Audi R8s to meet with us.

The Term Sheet

We averaged three to four of these dates before we were invited into bed with a term sheet. The relationship is not exclusive, so it is kosher, and advisable, to be pursued by multiple potential acquirers in parallel. Ideally they would be lined up such that multiple term sheets arrive at roughly the same time. Different companies prefer to have varying amounts of information or certainty before putting together a term sheet. In general, the time spent or saved here will be made up during due diligence. But for the purpose of timing the term sheets, it is not unreasonable to ask about each potential acquirer's timeline and form expectations.

Once you get your first term sheet, it’s time to celebrate with your cofounder(s). If the offer is interesting enough, then it’s also time to get a lawyer. Ideally other offers will soon be coming in and you can use them to get better terms from each company. While you will be negotiating the key terms, your lawyers will be taking care of less familiar ones like escrow, indemnification, etc. They’ll explain what this means, the consequences, and for significant ones, ask how hard you want to push for more favorable terms. For these, we usually just asked them to push for market terms.

It took us roughly a week to agree on the terms and decide which company Stypi would join.

Due Diligence and Closing

Now it's time for monogamy. When you sign a term sheet, there will also be a separate exclusivity agreement that requires you to reject any other outstanding offers and, for a period of 45 days, remain faithful and not solicit other offers. There will be more meetings, and every document your company has ever signed will be scrutinized (your lawyers will want to screen them before sharing) to make sure you did not misrepresent your company's position. This process should be taken seriously, as either party can still back out of the deal. But the default outcome is that the deal will close, and unless you are hiding something big, like a lawsuit or stolen code, there shouldn't be anything to lose sleep over.

In parallel with due diligence, weighted towards the tail end, preparations will begin for closing. Mechanically this means filling out and signing a lot of forms. A lot of time will be spent chasing down information from people connected to your company, for example your advisors’ addresses or investors’ wire instructions. Many will need signatures so there might actually be some physical chasing down.

The complexity of the deal or parties involved will dictate the time frame but for small companies, this process will take roughly 3-4 weeks.


What happens after exit is entirely up to you. Life can be as different or as similar to what it was before. Some founders take time off, some throw massive parties, and others buy and crash brand new Lamborghinis. For Stypi, life has not changed very much and we went straight to work the day after closing. We still have a long way to go before Stypi becomes what we envisioned, and we chose an acquirer that shares that vision and wants us to pursue it unencumbered.


Optimizing WordPress Performance

Good programmers know the dangers of premature optimization, but even my techiest friends are surprised by how little horsepower it takes to bring down their site. Let's find out how much:

>> siege http://www.blog.com -t30s -c10 -b
** SIEGE 2.70
** Preparing 10 concurrent users for battle.
The server is now under siege...
Lifting the server siege... done.
Transactions: 389 hits
Availability: 100.00 %
Elapsed time: 29.40 secs
Data transferred: 2.76 MB
Response time: 0.75 secs
Transaction rate: 13.23 trans/sec
Throughput: 0.09 MB/sec
Concurrency: 9.90
Successful transactions: 389
Failed transactions: 0
Longest transaction: 1.21
Shortest transaction: 0.50

The blog was only able to handle an unlucky 13 requests per second. That's it. In reality that number will be even lower because of additional content and installed plugins. Thus the average laptop today can easily bring an unoptimized WordPress blog to its knees. However, with a few simple optimizations, you can more than 10x the amount of traffic your blog can support.

Even if you are not worried about DoSers or jealous (and tech savvy) girlfriends, making the site fast and responsive improves the user experience. You’ll also never have to worry about posting something too popular that your blog buckles under the surge of interested readers. We’ll do this by optimizing the backend with WP Super Cache and Varnish and optimizing the frontend with compression and WP Minify.

Test Setup

All tests are run against a newly installed WordPress (v3.4.1) site running on a Linode 512 (512MB RAM) served by Nginx (v1.2.2). For exact setup/configurations, refer to my Installing WordPress on Linode guide. I have edited my hosts file to point www.blog.com to this server (I unfortunately don’t actually own that domain).

Throughout this post, I will be using Siege and Google PageSpeed Insights to benchmark our progress.

Disclaimer: Only benchmark sites you own or have explicit permission to benchmark. Otherwise it can be viewed as an attack, which will put you in loads of trouble with your parents (and the law).

Backend Optimizations

Two easy ways to optimize the backend are installing WP Super Cache or Varnish. You get more mileage out of Varnish, and it's a general-purpose solution, but it's slightly harder to install and set up. I'll go over both, but feel free to skip the WP Super Cache section if you intend to use just Varnish.

WP Super Cache

WP Super Cache is a WordPress plugin, so install it like any other WordPress plugin and enable it with the basic settings. Now let's lay siege:

>> siege http://www.blog.com -t30s -c25 -b
** SIEGE 2.70
** Preparing 25 concurrent users for battle.
The server is now under siege...
Lifting the server siege... done.
Transactions: 4555 hits
Availability: 100.00 %
Elapsed time: 29.82 secs
Data transferred: 32.89 MB
Response time: 0.16 secs
Transaction rate: 152.75 trans/sec
Throughput: 1.10 MB/sec
Concurrency: 24.92
Successful transactions: 4555
Failed transactions: 0
Longest transaction: 1.39
Shortest transaction: 0.09

A whopping 11x improvement! WP Super Cache saves us from the biggest bottleneck which is the database, and as you can see it’s quite significant. But we can do even better.

The astute observer will notice that I changed the concurrency from 10 to 25. The reason is that if the server responds faster than we are requesting, we are not truly testing its limits. But if we send requests too fast, the server will choke. So while I show only one siege result, I actually ran several trials and am posting only the optimal one.
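Those trial runs are easy to script. A sketch, assuming siege is installed and www.blog.com points at a server you own:

```shell
# Pull just the "Transaction rate" value out of a siege report.
rate_of() { awk -F': *' '/Transaction rate/ {print $2}'; }

# Sweep a few concurrency levels (siege prints its report to stderr):
# for c in 10 25 50 100; do
#   printf 'c=%s  ' "$c"
#   siege http://www.blog.com -t30s -c"$c" -b 2>&1 | rate_of
# done

# The helper applied to the report above:
echo 'Transaction rate: 152.75 trans/sec' | rate_of
```

Comparing the printed rates across the sweep makes the sweet spot obvious.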


Varnish

Varnish sits in front of Nginx and caches requests, so cached requests never touch Nginx or anything behind it (which includes WP Super Cache). Thus Varnish is a general-purpose solution that can optimize any site, not just WordPress. It's not quite as easy to set up, but still relatively simple. To install:

yum install varnish

Now open up /etc/sysconfig/varnish, find the line VARNISH_LISTEN_PORT, and set it to 80:

VARNISH_LISTEN_PORT=80

Now edit /etc/varnish/default.vcl and replace the backend port with a high one of your own. I will use 8080:

backend default {
  .host = "127.0.0.1"; # nginx on the same machine
  .port = "8080";
}

Now go to all your nginx configuration files and change them to listen on this custom high port instead of 80. That means /etc/nginx/nginx.conf, /etc/nginx/sites-available/*, and /etc/nginx/conf.d/*. Then restart nginx (service nginx restart), start varnish (service varnish start), and let's lay siege again:

>> siege http://www.blog.com -t30s -c50 -b
** SIEGE 2.70
** Preparing 50 concurrent users for battle.
The server is now under siege...
Lifting the server siege... done.
Transactions: 4912 hits
Availability: 100.00 %
Elapsed time: 29.54 secs
Data transferred: 35.47 MB
Response time: 0.28 secs
Transaction rate: 166.28 trans/sec
Throughput: 1.20 MB/sec
Concurrency: 45.90
Successful transactions: 4912
Failed transactions: 0
Longest transaction: 5.65
Shortest transaction: 0.07

We managed to squeeze another 13 requests/second over WP Super Cache!

Frontend Optimizations

While siege is fun, most of the sluggishness a user experiences is on the frontend. Google PageSpeed Insights is a great tool for finding ways to improve on this front. After running it, we can see we need to enable compression and optimize our css and javascript.

Pagespeed Test Results


Enable Compression

To enable compression, open your nginx configuration /etc/nginx/nginx.conf and replace the commented #gzip on line with the following:

gzip on;
gzip_http_version 1.0;
gzip_vary on;
gzip_comp_level 6;
gzip_proxied any;
gzip_types text/plain text/html text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;
gzip_buffers 16 8k;
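After restarting nginx you can spot-check that responses are actually compressed. A sketch, assuming curl is available (substitute your own host):

```shell
# A compressed response carries a "Content-Encoding: gzip" header.
# Real check (commented out; needs a live server):
# curl -s -D- -o /dev/null -H 'Accept-Encoding: gzip' http://www.blog.com/ |
#   grep -ci '^content-encoding: *gzip'

# Against a sample compressed response, the count is 1:
printf 'HTTP/1.1 200 OK\r\nContent-Encoding: gzip\r\n' |
  grep -ci '^content-encoding: *gzip'
```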

Minify and Combine JS/CSS

To minify and combine javascript and css, just install the WP Minify plugin. The plugin misses some files (e.g. theme files); for those, go to WP Minify's settings -> "Show Advanced Options" and enter each file into the "Non-Local Files Minification" textarea.

We rerun PageSpeed Insights and get an almost perfect 99/100.

Pagespeed Test Results

With just these simple optimizations of installing Varnish/WP Super Cache, enabling compression, and js/css minification, your blog can handle hundreds of requests a second and load with blazing speed on the browser. Now that being slashdotted is no longer a concern, off you go to write some content to go viral!


Installing WordPress

WordPress has a fairly straightforward installation process, but we first need to prepare the web and database servers. WordPress requires only the LMP of LEMP but this guide is targeted towards Nginx users (sorry Apache). We will cover the Nginx configuration, MySQL setup, and finally the WordPress installation itself.


Nginx Configuration

Ideally you are separating your nginx configuration files per site, which you are if you followed my LEMP setup guide. Create and enable a new configuration for the new WordPress blog:

cd /etc/nginx/sites-enabled
touch /etc/nginx/sites-available/jasonchen.me.conf
ln -s ../sites-available/jasonchen.me.conf jasonchen.me.conf
vim jasonchen.me.conf

If you are using a single configuration file, it is most likely in /etc/nginx/nginx.conf. Open it with your favorite text editor:

vim /etc/nginx/nginx.conf

Now paste the following lines into the new file (or inside the http block, if you are using a single configuration file):

server {
    listen      80;
    server_name www.jasonchen.me;
    access_log  /var/log/jasonchen.me/access.log;
    error_log   /var/log/jasonchen.me/error.log;
    root        /opt/jasonchen.me;
    index       index.php;

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    location / {
        try_files $uri $uri/ /index.php;
    }
    location ~ \.php$ {
        include       /etc/nginx/fastcgi_params;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /opt/jasonchen.me$fastcgi_script_name;
    }
    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}

server {
    listen  80;
    server_name jasonchen.me;
    server_name_in_redirect off;
    rewrite  ^ http://www.jasonchen.me$request_uri? permanent;
}

This essentially tells nginx to serve a website named www.jasonchen.me, with php support. Static files are served directly and not logged. All traffic to jasonchen.me will also be redirected to www.jasonchen.me. We should now create the log directory for nginx, create a dummy index.php file to test this configuration, and restart nginx.

mkdir /var/log/jasonchen.me
mkdir /opt/jasonchen.me
echo "<?php phpinfo(); ?>" >> /opt/jasonchen.me/index.php
service nginx restart

Now if you visit your site from a browser, you should see the purple php info page.


MySQL Setup

For security, we are going to create a separate MySQL user and database for WordPress. To do so, log in as an admin:

mysql -u root -p

and enter in your password. Now in the mysql console, we will create a database and a dedicated user with permissions only for that database:

create database wordpress;
create user 'wordpress'@'localhost' identified by 'password';
grant all privileges on wordpress.* to wordpress@localhost;
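Before logging out you can confirm the grant took effect; the output should include a line granting ALL PRIVILEGES on wordpress.* to the new user:

```
show grants for 'wordpress'@'localhost';
```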

Now make sure this works by exiting the current mysql root console (Ctrl+D) and logging in as your new user:

mysql -uwordpress -ppassword

If you get the mysql console, we are now finally ready to install WordPress.


WordPress Installation

Now let's navigate to the installation directory and remove the dummy php test we added earlier:

cd /opt
rm -rf jasonchen.me

Next, download WordPress, extract it, and rename it:

wget http://wordpress.org/latest.tar.gz
tar -xvf latest.tar.gz
mv wordpress jasonchen.me

WordPress should now work, but we will want to fix the file ownership so the files are owned by apache:

chown -R apache:apache jasonchen.me

This way you can easily install plugins and updates from the web interface. Why apache and not nginx? Php-fpm runs under the user apache by default. You can change this in /etc/php-fpm.d/www.conf if you would like.
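For reference, the relevant lines in /etc/php-fpm.d/www.conf look like this (stock CentOS values; if you switch them to nginx, chown the files to nginx and restart php-fpm instead):

```
; /etc/php-fpm.d/www.conf
user = apache
group = apache
```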

Now visit your blog in the web browser and follow the instructions and you should be set to configure your new blog!

I'll leave most of the customization up to you but will mention one thing. By default posts are in the form jasonchen.me/?p=123. However most people want readable URLs like this: jasonchen.me/2012/07/sample-post/. WordPress almost does this for you, but it includes index.php in the name: jasonchen.me/index.php/2012/07/sample-post/.

If you want to get rid of index.php, select "Custom Structure" in the Permalink Settings page (jasonchen.me/wp-admin/options-permalink.php) and enter /%year%/%monthnum%/%postname%/ in the textbox. If you followed the nginx configuration instructions above, your web server is already configured to handle this.
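This works because of the try_files directive in the nginx configuration above: any URL that doesn't match a real file or directory falls through to index.php, which lets WordPress route the pretty permalink itself:

```
location / {
    try_files $uri $uri/ /index.php;
}
```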

Congratulations on your new and now fully functional WordPress blog!

If you want to make it blazing fast, check out my Optimizing WordPress Performance guide.


Setting up a LEMP Server on CentOS 6

We are going to set up a Linux, Nginx, MySQL and PHP (LEMP) server. Technically, this only covers the EMP part, since I am assuming you have root access to a server, which requires an operating system to already have been installed (CentOS 6 to be specific). I'm also assuming you already have DNS properly set up for the site you want to host. I will be using vim but you can of course use whatever editor you prefer.

As always before you get started be sure to: yum update


Nginx

To get the latest stable version and avoid building from source, add the nginx repository to yum: vim /etc/yum.repos.d/nginx.repo

Paste this configuration:

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/6/$basearch/
gpgcheck=0
enabled=1

Now it's simply:

yum install nginx
service nginx start

If you visit your server's IP address in a browser, you should see a Welcome to nginx page.

Next we’ll set up your web page. I highly recommend using virtual hosts for ease of management. The idea is setting up separate directories and files to enable managing each site’s configuration individually without affecting the others. To do this, edit your nginx.conf:

cd /etc/nginx
vim nginx.conf

Add the following include line to the http block in nginx.conf:

http {
  # ...
  include /etc/nginx/sites-enabled/*;
  # ...
}

Now we need to set up the config files for your particular site (assuming you are still in the /etc/nginx directory):

mkdir sites-enabled
mkdir sites-available
cd sites-available
vim jasonchen.me.conf

You can call your configuration files whatever you want, but I always name them after the site's domain name so there's no ambiguity about which site I am affecting. Paste this into the configuration file:

server {
  listen       80;
  server_name  www.jasonchen.me jasonchen.me;
  access_log   /var/log/jasonchen.me/access.log;
  error_log    /var/log/jasonchen.me/error.log;
  location / {
    root   /opt/jasonchen.me/;
    index  index.html;
  }
}

Note: you should change the server_name, access_log, error_log, and root values to whatever they are for your site. As their names suggest, access_log and error_log are where nginx will log to. Most programs log to /var/log so I put them there, namespaced by the domain name, but again you can choose whatever name and location pleases you. If those directories have not been created, you should do that now:

mkdir /var/log/jasonchen.me
mkdir /opt/jasonchen.me

This configuration file is in the sites-available folder and our nginx only includes configurations in the sites-enabled folder so we will have to link the file:

cd /etc/nginx/sites-enabled
ln -s ../sites-available/jasonchen.me.conf jasonchen.me.conf

Now enabling/disabling a site is as easy as creating/destroying a symbolic link. Now let’s set up a dummy html page to test your configuration:

cd /opt/jasonchen.me
echo "I just set up nginx with virtual hosts!" >> index.html

Now, if you set up DNS properly, you should be able to visit your site and see a page with "I just set up nginx with virtual hosts!" (isn't quirks mode great?).


PHP

Next, we install php. To get the latest version working with nginx, you may need to add the epel and remi repositories. CentOS is a very conservative distribution, so it takes a bit longer for new versions of software to reach its repositories. Epel and remi fill in the gaps for those of us who want the latest and greatest.

rpm -Uvh http://linux.mirrors.es.net/fedora-epel//6/x86_64/epel-release-6-6.noarch.rpm
rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm

Remi is not enabled by default so you should vim /etc/yum.repos.d/remi.repo. Under [remi] change enabled=0 to enabled=1.
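If you prefer a one-liner, sed can flip the flag inside just the [remi] section. A sketch, demonstrated here on a scratch copy rather than the real repo file:

```shell
# Flip enabled=0 to enabled=1, but only between the [remi] header
# and the next section header. Against the real file it would be:
# sed -i '/^\[remi\]$/,/^\[/ s/^enabled=0$/enabled=1/' /etc/yum.repos.d/remi.repo
repo=$(mktemp)
printf '[remi]\nenabled=0\n[remi-test]\nenabled=0\n' > "$repo"
sed -i '/^\[remi\]$/,/^\[/ s/^enabled=0$/enabled=1/' "$repo"
cat "$repo"   # only the [remi] section's enabled flag is flipped
```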

We will be using php-fpm. Alternatives are plain php or fastcgi, but php-fpm offers greater performance and customization, should you need it. I also had the pleasure of working alongside Rasmus Lerdorf for a summer and he uses php-fpm. So there, php-fpm it is.

With remi and epel enabled, installing and starting php for nginx is as easy as:

yum install php php-fpm php-common
service php-fpm start

Now you will need to modify your nginx configuration to use php-fpm. If you followed the nginx section, simply add a location block inside the server block. So the configuration should look like this (vim /etc/nginx/sites-available/jasonchen.me.conf):

server {
  listen       80;
  server_name  www.jasonchen.me jasonchen.me;
  access_log   /var/log/jasonchen.me/access.log;
  error_log    /var/log/jasonchen.me/error.log;
  location / {
    root   /opt/jasonchen.me/;
    index  index.php index.html;
  }
  location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /opt/jasonchen.me$fastcgi_script_name;
  }
}

Again, please substitute your own values for server_name, access_log, error_log, and root. Now remove the index.html file in your site root folder (rm /opt/jasonchen.me/index.html) and add an index.php file (vim /opt/jasonchen.me/index.php) with the following:

<?php phpinfo(); ?>
Now restart nginx (service nginx restart) and visit your site in a web browser and you should see the familiar php info page.


MySQL

Last, and possibly least, we need to install MySQL.

yum install mysql-server php-mysql
service php-fpm restart

You should secure your installation, so run:

mysql_secure_installation
Follow the instructions and once you are done you should be able to run

mysql -u root -p

and start poking around your shiny new database server.

Wrapping Up

Okay, so MySQL was not the last thing we needed to do (lest you ignore this sage advice!). You should add nginx, php-fpm and mysqld to startup so that if you need to reboot, your site restarts automatically:

chkconfig --add nginx
chkconfig --add php-fpm
chkconfig --add mysqld
chkconfig --levels 235 nginx on
chkconfig --levels 235 php-fpm on
chkconfig --levels 235 mysqld on

If you want to be really advanced you should install monit or some other monitoring service to make sure none of these components go down, but that is another topic for another day.

That’s it! Enjoy your new LEMP server.


Installing WordPress on Linode

When setting up this blog, I realized that there were quite a few steps to setting up your own hosted WordPress installation. So I wrote down all the steps I took and will share them with you here. This tutorial goes through the whole pipeline, which is quite lengthy, so it will be broken up into four sections.

My only assumptions are that you have picked a domain name and have root access to a server running CentOS 6. The former is a much blogged about topic and the latter is covered here: Getting Started with Linode. This guide is targeted towards Linode users but in theory you should be able to use another host and still follow along.


  1. Server Setup
  2. WordPress Installation
  3. Optimizing Performance
© Copyright 2015, All Rights Reserved