September 3 Johannes Schmitt schmittjoh

Inspection Pages Reloaded

Scrutinizer has undergone quite a few changes recently. With all the information that is collected for your project, it was about time to adjust the different inspection pages to display that information better.

New Summary Page

Besides basic information like the duration of the inspection and the analysis tools used, the new summary page now also makes the inspected commit range easily accessible. That way you know exactly which code changes were inspected.

In addition, the summary page contains several graphs. One displays the found issues grouped by label and severity; another shows the number of newly found, fixed, and existing issues in relation to one another.

Last but not least, the code ratings are now much more accessible. For one, the overall quality score is now displayed directly on the inspection page. While the quality score only reflects the current state of the project, a second graph shows you exactly how many classes and methods improved and how many deteriorated.

https://d2hs8c246tsqgl.cloudfront.net/blog/inspection-pages-reloaded/1.jpg

More detailed information can then be found on the various inspection-specific sub-pages.

New Issues Page

So far, we always displayed all issues on a single page. If you have more than a couple of issues, this can get quite confusing. The new issues page allows you to filter issues by label and severity. Filtering is partially implemented on the client side to give you a super-fast UI as you navigate between files.

https://d2hs8c246tsqgl.cloudfront.net/blog/inspection-pages-reloaded/2.jpg
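Conceptually, filtering comes down to a simple predicate over each issue's labels and severity. The sketch below is purely illustrative (the actual filtering runs in the browser, and the field names here are assumptions, not Scrutinizer's data model):

```python
def filter_issues(issues, labels=None, severities=None):
    """Return the issues matching the selected labels and severities.

    ``issues`` is a list of dicts with a ``labels`` list and a
    ``severity`` string. Passing ``None`` for a criterion means
    "no filter" for that dimension.
    """
    result = []
    for issue in issues:
        # Keep the issue only if it carries at least one selected label.
        if labels is not None and not set(issue["labels"]) & set(labels):
            continue
        # Keep the issue only if its severity is among the selected ones.
        if severities is not None and issue["severity"] not in severities:
            continue
        result.append(issue)
    return result
```

For example, `filter_issues(issues, severities=["major"])` keeps only major issues, while calling it with no criteria returns the full list unchanged.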

Happy quality coding! :)


August 19 Johannes Schmitt schmittjoh

Improved Code Structure View

In the initial release, the code structure view showed the average rating and the worst-rated classes and methods. We have now added a lot more depth to it:

https://d2hs8c246tsqgl.cloudfront.net/blog/improved-code-structure-view/1.png

As you can see, the different classes and methods are now linked and will take you to a more detailed view. In addition to the worst-rated elements, we have also added some common searches, for example the least tested classes, or the least tested operations relative to their complexity.

The New Details View: Clean and Expandable

The new details view is kept clean and, by default, shows only minimal information such as a few selected metrics. Additional data can be displayed by expanding the respective sections as needed.

https://d2hs8c246tsqgl.cloudfront.net/blog/improved-code-structure-view/2.png

Smart folding allows you to get a quick overview and then to dig in deeper:

https://d2hs8c246tsqgl.cloudfront.net/blog/improved-code-structure-view/3.png

Enjoy these updates! :)


July 29 Johannes Schmitt schmittjoh

Code Rating System Released!

As you work on software projects, you often have some areas which suffer from technical debt. Maybe there was a deadline which had to be met, and a feature had to be implemented a bit more unconventionally. Often, there are multiple areas which suffer from this. So, where should you start paying down your debt, and how much is it, anyway?

To address these questions, we are very excited to announce the immediate availability of our new code rating system. The new ratings - which are available for all future inspections - pinpoint the parts of your software that most require your attention. They also make it easy to track the progress of your project, not only for developers but also for project managers.

Criteria and Scores

In our rating system, we rate the design of your code. Design problems manifest themselves mainly in the form of code duplication, lack of clarity, and complexity. These three factors are measured via software metrics that we collect for your code (see the used tools). For example, one metric used to assess complexity is cyclomatic complexity.

All code elements are rated on a scale from 0 (worst) to 10 (best). In addition, we group ratings into the following classes:

Class         Interval
very good     [8, 10]
good          [6, 8)
satisfactory  [4, 6)
pass          [2, 4)
critical      [0, 2)
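The mapping from score to class can be sketched as a simple threshold check (a minimal illustration following the table above; the function name and boundary handling are assumptions, not Scrutinizer's actual implementation):

```python
def rating_class(score: float) -> str:
    """Map a 0-10 code rating to its attribute class.

    Per the table above, each interval's upper bound is exclusive,
    except for the top interval [8, 10].
    """
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score >= 8:
        return "very good"
    if score >= 6:
        return "good"
    if score >= 4:
        return "satisfactory"
    if score >= 2:
        return "pass"
    return "critical"
```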

Tangible Assessment, your Quality Score

In addition to the individual ratings of code elements (such as methods), we also compute a weighted average for your entire project: the Quality Score.

This average is a good assessment of the overall quality of your project and its technical debt. On the one hand, this is interesting for you, as it allows you to track overall progress. On the other hand, it is also very interesting for developers who are considering adopting your library or framework, helping them make a more informed decision.
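As an illustration, a weighted average over element ratings could look like the sketch below. Note that the weighting scheme (e.g. weighting a method's rating by its size) is an assumption for the example; the post does not specify how Scrutinizer weights elements:

```python
def quality_score(elements):
    """Compute a weighted average rating for a project.

    ``elements`` is a list of (rating, weight) pairs, e.g. a method's
    0-10 rating paired with an illustrative weight such as its size.
    """
    total_weight = sum(weight for _, weight in elements)
    if total_weight == 0:
        return 0.0  # no rated elements yet
    return sum(rating * weight for rating, weight in elements) / total_weight

# Two small, well-rated methods and one large, complex one:
# the large method dominates and pulls the score down.
score = quality_score([(9.5, 10), (8.0, 15), (3.0, 50)])
```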

If you would like to share your quality score, we also provide a shiny new badge for you.

Finding Problem Areas

We also added a new code structure view page, which gives you a quick overview of the problematic areas of your project.

Code Structure Overview

While we only show the problem areas at this point, future versions will add more detailed views and suggestions on how to fix them.

We hope that these new features enable the community to write even better software; something we will all benefit from. Help us spread the word.

That’s it so far. Enjoy and tweet us your quality score! :)


July 26 Johannes Schmitt schmittjoh

Badges for your Repositories

The recent addition of metrics allows us to roll out several new features. As a first feature, we have added two badges which you can place in your README files - provided, of course, that you like our service.

The first one is a general badge which you can use to link to your repository on Scrutinizer.

https://d2hs8c246tsqgl.cloudfront.net/blog/badges-for-your-repositories/1.png

The second badge allows you to display your code coverage information:

https://d2hs8c246tsqgl.cloudfront.net/blog/badges-for-your-repositories/2.png

You can find the links in Markdown and several more formats on your repository’s overview page. Special thanks to icecave studios, who provided these cool badges :)
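For reference, a badge embed in a README generally follows the standard Markdown image-inside-link pattern. The placeholders below are just that - placeholders; copy the exact URLs from your repository's overview page:

```markdown
[![Scrutinizer](<badge-image-url-from-overview-page>)](<your-repository-page-on-scrutinizer>)
```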


July 22 Johannes Schmitt schmittjoh

Semantic Diffs and Activity Streams

Scrutinizer collects a lot of data for your project, ranging from concrete issues and patches to raw metric data. So far, Scrutinizer has always shown you the current state of your project in the form of inspection results.

As you start using Scrutinizer, inspections might contain a lot of issues. So many that you cannot fix them right away, but need to work through them bit by bit over several sprints. To let you stay productive and use all of Scrutinizer’s analyses without drowning in information overload, Scrutinizer now supports the wonderful concept of semantic diffs.

What is a Semantic Diff?

A semantic diff is much like a diff as you know it from Git, but instead of comparing changes in terms of lines removed or added, the structure of your project is compared. This allows Scrutinizer to point out exactly which issues are created, changed, or fixed by specific code changes. Since a picture is worth a thousand words, let us take a look:

Semantic Diff

In the case above, two issues have changed. As you can see, for changed issues Scrutinizer not only shows you the issues that changed, but also points out how they changed. In both cases above, the number of methods and the complexity increased.

Even if you cannot fix the issues right away, semantic diffs give you a good indication of where your project is headed.
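Conceptually, a semantic diff of issues boils down to comparing two issue sets keyed by the code element they belong to. The sketch below is a simplified illustration of that idea, not Scrutinizer's actual data model:

```python
def semantic_diff(before, after):
    """Compare the issues of two inspections.

    ``before`` and ``after`` map an issue key (e.g. file + code
    element + issue type) to its details. Returns which issues were
    created, fixed, or changed between the two inspections.
    """
    # Keys only in the new inspection: newly created issues.
    created = {k: after[k] for k in after.keys() - before.keys()}
    # Keys only in the old inspection: issues that were fixed.
    fixed = {k: before[k] for k in before.keys() - after.keys()}
    # Keys in both, with differing details: issues that changed.
    changed = {
        k: (before[k], after[k])
        for k in before.keys() & after.keys()
        if before[k] != after[k]
    }
    return created, fixed, changed
```

Tracking, say, a method's complexity as the issue detail would let such a diff report not just that an issue changed, but how - e.g. that its complexity went from 2 to 5.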

Activity Streams

As another feature to help you stay on top of your project and view its progress, we now generate activity streams, both for your repository and individually for each user:

Repository Activity Stream

We have already found these features very helpful in the development of Scrutinizer itself. Besides, it is a very rewarding experience to focus on progress and see the number of issues dwindle from commit to commit.

We hope you love them as well!