Hello. My name is John Pocknell, Toad for Oracle product manager. And welcome to this series on enabling agile database development using Toad for Oracle.
In this final step, Step 10, we're going to talk about how to execute workflow automations, such as PL/SQL unit testing and code reviews, on the Toad Intelligence Central server and how to integrate it with the continuous integration process.
So why is this important? Well, once you've created your PL/SQL unit test execution and code review automations and published them, they can be run independently of the Toad client on which they were originally created.
Once these automation apps are created and stored on the server, they can be executed and scheduled there, or called from Jenkins, Hudson, or a similar continuous integration server as part of an automated build process.
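To make that CI hook concrete, here's a minimal Python sketch of a build step that a Jenkins or Hudson job could run. Everything product-specific in it is an assumption: the runner executable, its flags, the server URL, and the app path are hypothetical placeholders, since the actual invocation syntax isn't shown here. The point is simply that the build step exits non-zero, and so fails the build, when the published automation reports failure.

```python
import subprocess
import sys

# Hypothetical command line for running a published automation app.
# The executable name, flags, server URL, and app path below are
# illustrative placeholders, not documented Toad Intelligence Central syntax.
CMD = [
    "ToadAutomationRunner.exe",                       # hypothetical runner
    "--server", "https://tic.example.com",            # hypothetical TIC server
    "--app", "Projects/QuestPerf/UnitTestsAndReview", # hypothetical app path
]

def run_automation() -> int:
    """Run the published automation and return its exit code."""
    result = subprocess.run(CMD)
    if result.returncode != 0:
        print("Automation reported failure; failing the build.", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    # CI servers such as Jenkins and Hudson treat a non-zero exit code
    # from a build step as a failed build.
    sys.exit(run_automation())
```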
Any reports produced by a code review or a unit test execution are generated on the Toad Intelligence Central server. And the TIC web server enables anyone to view a code quality dashboard representing the overall quality of a development project over time, or to drill down into particular details.
So let's get into a demo and see how this works. All right. So we're back in the Toad UI at the point where we left it in the previous video, where we created an app and published it to Toad Intelligence Central. The Intelligence Central flyout window is over on the left-hand side here, and at the top you'll see a hyperlink, which launches the web console. We'll look at that in a second.
Underneath that there are project folders. And inside the project folders there are artifacts, files that have been published from Toad for Oracle.
So we had published the automations on the right into a folder on Toad Intelligence Central. And at the top of the Intelligence Central flyout window on the left-hand side is that hyperlink to the web console, which opens a web report showing you a number of different things inside Toad Intelligence Central.
I'll just show you what that looks like. So when you click that hyperlink, this is what you get: Toad Intelligence Central. If I go to the home page, you'll see a list of everything that's on Toad Intelligence Central -- when it was published, who published it, and what it is. And right at the bottom is the app that I published in the previous video, sitting inside this folder here. OK.
If I go to Reports, what this is going to show is just-- it's like an administration console, showing you activity that's happened over a particular time period. In this case, the last 30 days. It's showing you what's trending. It's showing you what has been used. So it's essentially items that have been published, reports that have been run, such what.
And on the right-hand side, you can see a list of files that have been published.
OK. If we go down to the Toad for Oracle section, the first level is overall code quality, shown initially at the database level. From there you can drill down into the individual databases. If I click on one of the databases I'm using, it shows the overall code quality for that database. OK.
And you can see here we're looking at various high-level quality metrics, including the Toad Code Rating. And we're showing overall code test results -- where tests passed and where they failed. We actually have a failure here.
If we look at the schemas at the bottom here, there are two schemas in use on this database, and the second one is showing that only 92.86% of tests have passed. Let's drill into that. OK. This is a more granular view now, looking at that one schema, and you can see here that we've got some failures.
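As a quick aside on where a figure like that comes from: the pass rate is just passes divided by total tests. Here's a tiny illustrative calculation in Python; the counts are mine, not taken from the demo database, but 13 passes out of 14 tests is one combination that rounds to exactly 92.86%.

```python
def pass_rate(passed: int, total: int) -> float:
    """Percentage of unit tests that passed, rounded to two decimal places."""
    return round(100.0 * passed / total, 2)

# Illustrative counts only: 13 of 14 tests passing rounds to the
# 92.86% figure shown on the dashboard for the second schema.
print(pass_rate(13, 14))  # 92.86
```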
If we go to the left-hand side, again, below Code Quality we can then drill into particular aspects of a project. So Code Analysis will look at the code review status. Code Tests will look at the unit testing status.
So Code Analysis, first of all. Let's select the XE Database again. And the Quest Perf Schema, which is the one that we've been using for these demos. You can see here an overall rollup report of the code quality. So we're looking at green.
But there are a couple of aspects here where you don't have green check marks -- you've got some exclamation marks instead, which could be indicative of issues. So you can drill in there, and we can look at some of the particular metrics in more detail.
So here, for example, we're looking at the trend of the Toad Code Rating indicator, which is actually fairly steady. Some of these other metrics, like Halstead and Maintainability, are again fairly steady. If we see trends down or up, that's indicative of quality either improving or getting worse. OK? The colored markers on the left here go from green to yellow to red. Obviously, if you have any red issues, that's really bad.
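Here's one way to think about how such trend markers could be derived, sketched in Python below. To be clear, this is my illustration of the idea, not how Toad Intelligence Central actually computes its indicators; the slope estimate, the thresholds, and the higher-is-better default are all assumptions.

```python
from statistics import mean

def trend_status(history: list[float], higher_is_better: bool = True) -> str:
    """Classify a metric's trend as green, yellow, or red from recent history.

    Uses a crude slope estimate: the mean of the second half of the series
    minus the mean of the first half. The thresholds are illustrative
    guesses, not the rules Toad Intelligence Central applies.
    """
    half = len(history) // 2
    slope = mean(history[half:]) - mean(history[:half])
    if not higher_is_better:
        slope = -slope  # for metrics like complexity, where lower is better
    if slope >= 0:
        return "green"                          # steady or improving
    return "yellow" if slope > -5.0 else "red"  # mild vs. sharp decline

# For example, a Maintainability Index drifting downward over five builds:
print(trend_status([82, 81, 79, 74, 70]))  # prints "red"
```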
Similarly, Code Tester. So again, under Code Tester, click on the database. Quest Perf is the only schema for which I have unit tests here. I can go and look at this one and see whether the tests passed or failed. For this one here, the tests actually all failed, so that would be something I'd need to check out.
So you get a sense of how it works. It's showing you data in a central place. But the important aspect is this idea of having an overall, top-level code quality dashboard that's giving you an at-a-glance view of the quality of your development project over time.