There are 16 (now 17) servers that power CodePen.

Links from the episode:

Comments

  • Enjoying the podcast. Route 53 gets its name from DNS, which runs on TCP/UDP port 53 🙂

  • TJ Bowman

    I don’t have much experience with a lot of the tech you’re discussing, but it’s still interesting to hear what goes into a site like CodePen. Looking forward to more podcasts.

  • punkydrewster713

    Thanks for posting these – they are really informative. If you are taking requests for future episodes, it would be cool to hear about your processes for iterations – how ideas and features are fleshed out and mocked up before diving into the development part.

  • Great episode. I get more interested in this side of things every day. I get the same embarrassed feeling when someone at work starts talking about servers.

  • davidhemphill

    Love these podcasts. So helpful for getting an idea of how this stuff runs in a real production environment. I know you mentioned getting all this running locally; is it a smaller version of production, like all the services on one VM with a small development database?

    • tsabat

      We fleshed out a Vagrant box containing all the code/services/data, but scrapped it because Rails autoloading (traversing the entire codebase) interacted poorly with the NFS mount from host to guest. With the v1.5 release of Vagrant and rsync shared folders, we’ll probably revisit this (a Vagrantfile sketch of that approach appears after the thread).

      For the moment, we’ve just set up our main Rails app so that it does not collapse if any one service is missing, and we start/stop only the services we need for the work at hand. For example, CodePen can operate locally without screenshot/search/realtime/preprocessors (a sketch of that guard pattern also appears after the thread).

      • davidhemphill

        Very interesting, and thanks for the response. That sounds like it’d be a mammoth VM. I’ve been wondering how bigger apps like CodePen, GitHub, Twitter, Pinterest, etc. run locally and with a real dataset to work with.

        • tsabat

          The size was manageable. The page load speed was not. DB girth is more of a problem than service sprawl. At some point, having foreign key constraints set up through something like https://github.com/matthuhiggins/foreigner could help in pulling down a subset of the data (say, user IDs 1–10,000) rather than the whole caboodle (a sample migration appears after the thread).

          • davidhemphill

            Interesting. That’s been my experience. Our VM is slow, and our site has a player that has to keep playing. Pjax was timing out on nearly every request. We need to look into not pulling a 1:1 copy of the DB.
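
For readers curious about the rsync shared-folder approach tsabat mentions, here is a minimal Vagrantfile sketch. It assumes Vagrant 1.5 or later; the base box name and excluded paths are placeholders, not CodePen’s actual configuration.

```ruby
# Vagrantfile (sketch) — rsync synced folders require Vagrant 1.5+.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64" # placeholder base box

  # One-way rsync from host to guest avoids the slow per-file stat calls
  # over an NFS mount that made Rails autoloading crawl.
  config.vm.synced_folder ".", "/vagrant", type: "rsync",
    rsync__exclude: [".git/", "log/", "tmp/"]
end
```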
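The “don’t collapse if a service is missing” idea might look something like the sketch below, using a hypothetical screenshot client as the example. Class, method, and error names are illustrative, not CodePen’s code.

```ruby
# Illustrative only: one way a Rails app can keep serving pages when an
# optional backing service (screenshots, search, realtime, preprocessors)
# isn't running in development.
class ScreenshotClient
  class Unavailable < StandardError; end

  def self.capture(pen_id)
    new.capture(pen_id)
  rescue Unavailable, Errno::ECONNREFUSED => e
    Rails.logger.warn "screenshot service unavailable: #{e.message}"
    nil # callers render a placeholder instead of failing the request
  end

  def capture(pen_id)
    # HTTP call to the screenshot service would go here.
  end
end
```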
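Finally, a hypothetical migration showing how the foreigner gem declares foreign keys. With constraints like these in place, a script could follow them to export a consistent slice of data (say, users 1–10,000 plus their dependent rows) instead of the full database. The table names are guesses for illustration, not CodePen’s schema.

```ruby
# Sketch of a foreigner-based migration; table names are hypothetical.
class AddForeignKeys < ActiveRecord::Migration
  def up
    add_foreign_key :pens, :users      # pens.user_id     -> users.id
    add_foreign_key :comments, :pens   # comments.pen_id  -> pens.id
    add_foreign_key :comments, :users  # comments.user_id -> users.id
  end

  def down
    remove_foreign_key :comments, :users
    remove_foreign_key :comments, :pens
    remove_foreign_key :pens, :users
  end
end
```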