Hello!

Thank you all for the very useful comments describing the database and the Hydra use cases! They were very helpful to me this week.

I've just committed a new version of the interface. I've implemented the first feature and created a friendlier interface based on Bootstrap. For this feature I had to add new database requests, db-get-evaluations-count and db-get-evaluations-info, and new endpoints, ("jobset" name) and ("eval" id); the "status" endpoint moved to "/".

Now, when you launch Cuirass, the main page (localhost:PORT/) lists all specifications; clicking on a specification name shows the list of all evaluations of that specification, with the number of successful, failed, and pending builds for each evaluation; clicking on an evaluation ID shows the list of its builds with their statuses.

The evaluation list is split into pages of 20 evaluations each. I have implemented a page navigation tool that can be reused for the other pages of this kind we will implement later. The two sketches below show roughly how the routing and the pagination work.
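In case it helps review, the dispatch for the new endpoints works roughly like this. This is a simplified sketch, not the committed code: the respond-* procedures are placeholders standing in for the real Bootstrap page renderers.

  (use-modules (ice-9 match)
               (web request)
               (web uri))

  ;; Split "/eval/42" into ("eval" "42").
  (define (request-path-components request)
    (split-and-decode-uri-path (uri-path (request-uri request))))

  ;; Placeholder renderers; the real ones build Bootstrap pages.
  (define (respond-specifications-page)
    "all specifications")
  (define (respond-jobset-page name)
    (string-append "evaluations of " name))
  (define (respond-evaluation-page id)
    (string-append "builds of evaluation " (number->string id)))
  (define (respond-not-found)
    "not found")

  (define (url-handler request body)
    (match (request-path-components request)
      (()                    ; "/": list all specifications
       (respond-specifications-page))
      (("jobset" name)       ; evaluations of one specification
       (respond-jobset-page name))
      (("eval" id)           ; builds of one evaluation
       (respond-evaluation-page (string->number id)))
      (_ (respond-not-found))))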
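The pagination itself is just LIMIT/OFFSET arithmetic over the total returned by db-get-evaluations-count, along these lines (again a simplified sketch, with placeholder names):

  (define %evaluations-per-page 20)

  ;; Number of pages needed for TOTAL evaluations (ceiling division).
  (define (page-count total)
    (quotient (+ total (- %evaluations-per-page 1))
              %evaluations-per-page))

  ;; SQL LIMIT and OFFSET for the 1-indexed PAGE.
  (define (page->limit+offset page)
    (values %evaluations-per-page
            (* (- page 1) %evaluations-per-page)))

For example, with 47 evaluations, page-count returns 3, and page 3 maps to LIMIT 20 OFFSET 40, i.e. evaluations 41 to 47.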
Could you please take a look at the commit and the new functions?

I am still facing the local testing issue. When I tried to launch Cuirass with the large database you sent earlier, it crashed with a Git error. For now I modify my local database manually for testing, but I still have no idea how to fix this problem. Could you recommend some specifications to add to my local database? Alternatively, do you have a remote server with a working Cuirass where I could test the interface?

I've attached some screenshots of the interface pages I have locally.

Next I am going to implement separate pages for builds with different statuses, which will finish the first feature. I also think it would be useful to add more navigation buttons to the header; currently it has only the Guix logo linking to the main page.

Best regards,
Tatiana

2018-06-06 21:02 GMT+03:00 Danny Milosavljevic :
> Hi Tatiana,
>
> > I'm afraid that I am not familiar with typical Hydra use cases
>
> Generally, the continuous integration process should enable developers
> to get feedback about the effects of their changes.
>
> This means that as soon as a commit is made, an evaluation of the
> build source usually starts on the continuous integration server.
>
> (Sometimes there are exceptions to this, for example in order not to
> overload the build servers, but generally it's true.)
>
> For a new evaluation, as a developer I'd like to know:
>
> * Are more packages broken now than before? Which ones?
> * Are more packages working now than before? Which ones?
> * Do some packages work on more architectures than before? Fewer?
> * Is the build server still building my change? Or is it done, so that
>   I can trust that the information I see is complete? If not, what is
>   it building now or later?
>
> "Before" means "with the previous evaluation", "with some specific
> past evaluation", or "in another branch".
>
> I think this would be the most basic functionality.
>
> More advanced functionality would include automatically tracking the
> reason for a failure:
>
> * If it's a dependency failure, mark this package specifically so I
>   know I don't have to fix this package but rather a package it
>   depends on (which one?).
> * What kind of failure is it? What's the latest non-noise error
>   message in the build log? Display suggestions on what to do
>   about it.
>
> What do you think?
>
> > 4. Add additional information about previous builds (latest
> > successful, first broken, etc.) on this build page. For this
> > feature, we need to extend database requests functionality.
>
> Sounds good.
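P.S. For the "are more packages broken now than before" question, I imagine a query along these lines could be a starting point. The table and column names here are guesses, not the actual Cuirass schema; I would need to check the real one first:

  ;; Sketch of a "newly broken since evaluation :previous" query,
  ;; joining each job's build in the current evaluation with its build
  ;; in the previous one.  The Builds layout below is hypothetical.
  (define newly-broken-jobs-sql
    "SELECT cur.job_name
       FROM Builds cur
       JOIN Builds prev ON prev.job_name = cur.job_name
      WHERE cur.evaluation = :current
        AND prev.evaluation = :previous
        AND cur.status <> 0    -- failing in the new evaluation
        AND prev.status = 0;   -- succeeded in the old one")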