Note: This discussion was imported from Loomio. Click here to view the original discussion.
Proposal: No server side rendering of data for the web frontend
I am new to the project, so I don’t know for sure how much of the current web frontend is rendered on the client side and how much on the server side. However, I think we need to enforce a policy of no server side rendering of data. That means the web frontend would need to use the same API to access and render data that third party applications do (including mobile apps). In cases where page load performance needs to be optimized (e.g. Twitter renders some pages on the server so links to tweets or streams show up faster), the server side code could render data, but it MUST use the same HTTP/REST service that third party clients do.
This has some benefits. First, client side rendering has been shown to reduce overall server load, which is good for podmins. Second, it helps keep the API stable and up to date, since the reference web frontend is dogfooding it. Third, doing things this way means we won’t get feature drift between the API and the web frontend. While it might be easier to touch the database directly and just render the page from Rails when implementing a new feature, that means the functionality can’t be used by other clients. It might take a little longer to implement a new feature in the API first and then consume it from the web app, but in the end I think the developer ecosystem will greatly benefit. Going forward, if Diaspora remains successful, I think more and more people are going to want to access it from native mobile apps on their platform of choice.
- Yes: 1
- Abstain: 1
- No: 3
- Block: 0
Note: This proposal was imported from Loomio. Vote details, some comments and metadata were not imported. Click here to view the proposal with all details on Loomio.
Twitter actually renders more on the server side now than they did several months ago, that’s why there’s no more #! in the URLs (they didn’t switch to the HTML5 History API, they went back to server side rendering), and… to me at least… it’s much faster and more responsive than it used to be. Part of the reason they went back to rendering so much of their UI on the server was the complaints they were getting about how slow and unresponsive the site was.
Even GitHub, with its nice AJAX UI, is rendering on the server side and using PJAX to only send the part of the page that changed.
I’m not against client side rendering, but it does have disadvantages to go along with the advantages you mentioned. In fact, according to at least one person here, there were complaints from users when Diaspora started using the Backbone (client side rendered) UI that it’s currently using.
I’m not sure either. First of all, focusing on scaling vertically (many users per installation) instead of horizontally (many pods with not so many users each) is one reason we’re in the state we’re currently in.
The main problem I see is authorization: providing a dedicated API makes it a lot easier to handle different levels of permissions for apps, while the web frontend, being a core part of Diaspora, always needs access to everything. By unifying the APIs for Backbone and a possible OAuth/client API (it doesn’t exist yet!) we might easily leave loopholes in the authorization system. Also, the Backbone API as it currently stands provides a lot of convenience stuff for the web frontend; it isn’t clean and well organized at all.
If an API is built, I’m pretty certain we’ll stick with acts_as_api, so both definitions, the one for Backbone and the ones for all versions of the client API, will be close together in the code; just the controllers will be different. I might be wrong here, and there may be better ways to do it.
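To illustrate the point about keeping both definitions close together in the code: acts_as_api lets a model declare several named response templates side by side. Here is a plain-Ruby sketch of that idea, not the gem’s actual DSL; the `Post` class, its fields, and the template names are hypothetical, not Diaspora’s real schema.

```ruby
# Sketch: several API "templates" for one model live side by side,
# roughly the idea behind acts_as_api's per-version declarations.
# Post, its fields, and the template names are all hypothetical.
class Post
  attr_reader :id, :text, :author, :internal_flags

  def initialize(id:, text:, author:, internal_flags: {})
    @id, @text, @author, @internal_flags = id, text, author, internal_flags
  end

  # Each version lists exactly the fields it exposes.
  API_TEMPLATES = {
    backbone: [:id, :text, :author, :internal_flags], # frontend sees everything
    v1:       [:id, :text, :author]                   # stable client API
  }.freeze

  def as_api(version)
    API_TEMPLATES.fetch(version).each_with_object({}) do |field, hash|
      hash[field] = public_send(field)
    end
  end
end

post = Post.new(id: 1, text: "hello", author: "alice", internal_flags: { nsfw: false })
post.as_api(:v1) # => { id: 1, text: "hello", author: "alice" }
```

The controllers for the frontend and the client API would then differ only in which template name they pass in.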
The last concern I have is about versioning. Keeping them separate allows us to iterate faster on the web frontend, since for that we provide both the server and the client. For the client API, on the other hand, I’d like to see something well defined and versioned, so that we can introduce big changes while keeping a compatibility layer for some time instead of sending all outdated clients to hell.
Before you vote no, note that the naming of this proposal is somewhat misleading. As a point of clarification, I mentioned that I’m not completely against server side rendering. Rather, I’m saying that any server side rendering should go through the exact same API used by the client side code in the web frontend and by third party clients. I am aware that there are cases where server side rendering has practical performance benefits, but the server side code should get the data in a way that ensures the same data/actions are available to third party clients.
With regards to security, having a single interface into the API for federation and authentication, shared by the web frontend and by client applications, means you have only one attack vector instead of two. With one point of failure rather than two, fixing a security issue for the frontend fixes it for the client API, and the other way around. However, the question of authentication is orthogonal to this proposal, so it’s not necessarily included (I don’t know enough about the topic to make a proposal).
With regards to versioning, the API should be versioned, and pods should be able to support multiple versions of the API simultaneously. It’s up to the podmin how many revisions back to support, but this would allow client applications to be built on a stable API while allowing rapid enhancement of the API on a different endpoint for the web frontend and for client apps that want to be bleeding edge.
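In Rails, supporting several API revisions simultaneously is usually expressed directly in the routing layer. A hypothetical sketch (the resource names are illustrative, not Diaspora’s actual routes):

```ruby
# config/routes.rb (hypothetical sketch)
# Each supported API revision gets its own URI namespace, so a podmin can
# keep /api/v1 stable for third-party clients while /api/v2 moves quickly.
namespace :api do
  namespace :v1 do
    resources :posts, only: [:index, :show]
  end
  namespace :v2 do
    resources :posts
  end
end
```

Dropping support for an old revision then just means deleting one namespace block and its controllers.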
I’m more familiar with ASP.NET MVC, but it is similar to Rails at the 10,000 foot level. What I’ve done in the past is use semantic versioning in the URI scheme, so each version of the REST API that I still supported got its own endpoint. When I need to render data on the server, I do so by instantiating a Web API controller and working with it directly. That way I avoid the extra overhead of JSON serialization, and I can be reasonably sure that whatever I use could also be used from the client side (the methods of a Web API controller map directly to HTTP requests). I suspect something similar could be implemented in Rails.
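The Rails-flavoured equivalent of “instantiate the Web API controller” would more likely be extracting the data access into an object that both controllers share, so the HTML path skips JSON serialization without bypassing the API’s logic. A hypothetical sketch; `StreamQuery` and the field names are made up for illustration:

```ruby
# Hypothetical sketch: one query object feeds both the JSON API controller
# and the server-rendered HTML controller, so neither touches the DB directly.
require "json"

class StreamQuery
  def initialize(posts)
    @posts = posts
  end

  # The single source of truth for what a "stream" contains.
  def call(limit: 15)
    @posts.sort_by { |p| -p[:created_at] }.first(limit)
  end
end

posts = [
  { text: "old", created_at: 1 },
  { text: "new", created_at: 2 }
]
query = StreamQuery.new(posts)

api_body  = JSON.generate(query.call) # what /api/v1/stream would return
html_data = query.call                # same data, rendered server side, no JSON hop
```

Because both paths go through `call`, any behavior available to the web frontend is automatically available to third party clients.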
@jonne Authorization might not be too hard, Twitter had almost their entire frontend running as an API client for a while (and they still use it to update the page without reloading). I’d think cookie + same origin (no CORS headers or JSON-P) would indicate that the call is coming from the web frontend. Or it could be done the way FourSquare does it, where the frontend is “just another app” with an OAuth token. Of course that brings up another potential problem. An API endpoint, by its nature, needs to have CSRF protection disabled… while the web frontend should have CSRF protection enabled. That might not be as much of an issue if we go the FourSquare route and make the frontend use OAuth.
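The “cookie + same origin means frontend” heuristic above can be sketched in plain Ruby. The header names follow the standard `Origin` and `Authorization` request headers; the pod URL is an assumption, and this toy classifier is illustrative only, not a substitute for real CSRF protection:

```ruby
# Hypothetical sketch: classify an incoming request as "web frontend"
# (session cookie, same origin, CSRF checks apply) or "API client"
# (OAuth bearer token, CSRF checks skipped).
POD_ORIGIN = "https://pod.example.org" # assumed pod URL

def auth_mode(headers)
  if headers["Authorization"].to_s.start_with?("Bearer ")
    :oauth_client       # token-based client, no CSRF token expected
  elsif headers["Cookie"] && headers.fetch("Origin", POD_ORIGIN) == POD_ORIGIN
    :frontend_session   # same-origin browser session, enforce CSRF
  else
    :unauthenticated
  end
end

auth_mode("Authorization" => "Bearer abc123")     # => :oauth_client
auth_mode("Cookie" => "_session=xyz")             # => :frontend_session
auth_mode("Cookie" => "_session=xyz",
          "Origin" => "https://evil.example.com") # => :unauthenticated
```

Going the FourSquare route would collapse the first two branches into one: the frontend would simply always arrive as an `:oauth_client`.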
Just a follow-up: the idea of “server-side rendering”, even as a misnomer, is a big buzzword right now, and a lot of people don’t really know the implications of making client-side rendering the standard in their app.
Client-side rendering introduces many new problems. One thing I don’t want to do with this project is have it rely on multiple implementations of a templating language like this. When you work with Mustache/Handlebars, or some other kind of logic-less templating engine, you have to rethink a lot of your app’s code design, and you generally run into issues caused by using the same template on multiple engines. Unless we want to duplicate all of the Haml views into Mustache templates, or use Haml-JS, we’re going to need to share those templates on both implementations of the codebase. Not only that, but each browser has its own JS implementation, so we’re really talking about sharing the templates across around five different implementations of JS in the same app. Holy shit.
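To make the sharing concern concrete: a logic-less template like the one below is exactly the kind of artifact that would have to render identically in Ruby and in every browser’s JS engine. Here is a toy Ruby interpolator for such a template; the template string is hypothetical, and real code would use the mustache gem on the server and a JS Mustache engine in the browser rather than this sketch:

```ruby
# Toy sketch: the same logic-less template string could be rendered by Ruby
# on the server and by a JS Mustache engine in the browser.
TEMPLATE = "<p>{{author}} said: {{text}}</p>" # hypothetical shared template

# Minimal {{name}} interpolation, nothing like a full Mustache implementation.
def render(template, data)
  template.gsub(/\{\{(\w+)\}\}/) { data.fetch(Regexp.last_match(1).to_sym, "") }
end

render(TEMPLATE, author: "alice", text: "hi")
# => "<p>alice said: hi</p>"
```

Every behavioral difference between the server-side and client-side engines (escaping, missing keys, whitespace) becomes a bug that shows up only on one side.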
Thankfully, by using Rails we get access to its stellar caching system. 37signals uses the shit out of this caching (they called their technique “russian doll caching” because of the similarity between the doll-in-a-doll toy and the way each nested partial is cached at a different point). If we are to revisit this issue in the future, I strongly suggest serving small pieces of HTML with any JS we may need to run in the success() callback of that request. It’s a much more reliable method, and the speed difference compared with serving JSON and rendering that data into HTML on the client side is negligible.
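The core of the russian-doll trick is that each fragment’s cache key includes its record’s update timestamp, so touching a comment invalidates only the fragments that wrap it. A plain-Ruby sketch of the keying idea follows; in a Rails view you would write `cache post do … end` instead, and the data shapes here are hypothetical:

```ruby
# Hypothetical sketch of russian-doll cache keys: the outer fragment's key
# embeds the freshest updated_at of its children, so a child update busts
# the whole chain while untouched siblings stay cached.
CACHE = {}

def fetch(key)
  CACHE[key] ||= yield
end

def comment_html(comment)
  fetch(["comment", comment[:id], comment[:updated_at]]) do
    "<li>#{comment[:text]}</li>"
  end
end

def post_html(post)
  newest = post[:comments].map { |c| c[:updated_at] }.max
  fetch(["post", post[:id], post[:updated_at], newest]) do
    "<ul>" + post[:comments].map { |c| comment_html(c) }.join + "</ul>"
  end
end

post = { id: 1, updated_at: 10,
         comments: [{ id: 1, text: "hi", updated_at: 10 }] }
post_html(post)                       # renders and caches both fragments
post[:comments][0][:updated_at] = 11
post[:comments][0][:text] = "edited"
post_html(post)                       # new keys: outer + edited comment re-render
```

Served as small HTML fragments, these cached pieces are exactly what the success() callback approach above would receive.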
I think we shouldn’t forget that the most-used parts of the D* UI are already rendered on the client side…
Building the actual API upon that ‘sort-of-API’ base seems self-evident.