
InTrust Add-in for DNS debug logs

Is there still a way to capture DNS debug logs with InTrust? If so, can you tell me where to find the add-in?

 

Thanks,

Nicole

  • Nicole,

    I understand what you describe in your last comment, but it is too general. I'd like to know what controls the file creation operation and how, because the default MS solution does not provide such a mechanism, or I am not aware of it. Do you use a script or a third-party tool for this retention? Are you sure there are no gaps in events when the new file is created?
  • Igor:

    It is the default MS solution for debug logging. In the properties of the log you can set the maximum log size, which we have set to 500 MB, and rollover is then enabled via a PowerShell command (a sketch follows below). There are no gaps, as this is built into the MS logging properties.

    Please let me know if you need more information.

    Thanks,
    Nicole
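
    For reference, a minimal PowerShell sketch of the configuration described above, assuming the DnsServer module is available on the DNS server. Apart from the 500 MB size and the rollover flag taken from this thread, the parameter values are illustrative; note that -MaxMBFileSize takes a value in bytes despite its name.

    ```powershell
    # Enable DNS debug logging to a file with rollover (values illustrative).
    Set-DnsServerDiagnostics -EnableLoggingToFile $true `
        -LogFilePath "$env:WinDir\System32\dns\dns.log" `
        -MaxMBFileSize 500000000 `
        -EnableLogFileRollover $true `
        -Queries $true -Answers $true `
        -SendPackets $true -ReceivePackets $true `
        -UdpPackets $true -TcpPackets $true

    # Verify the resulting settings.
    Get-DnsServerDiagnostics |
        Select-Object EnableLogFileRollover, MaxMBFileSize, LogFilePath
    ```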
  • Oh, you use Set-DnsServerDiagnostics -EnableLogFileRollover 1. The solution I've shared with you does not support this option, and I'm afraid we cannot add support for it quickly; that would require custom development. Do you really need to keep all of these files? Right now I can only suggest switching back to Set-DnsServerDiagnostics -EnableLogFileRollover 0 and collecting the backup file in the System32\dns\backup folder, as I proposed earlier, by specifying the full path %WinDir%\Sysnative\dns\backup\dns.log in the data source.
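
    That fallback amounts to the following sketch; the verification step is an addition for convenience, not part of the original suggestion.

    ```powershell
    # Disable rollover so a single backup log file can be collected
    # from a fixed path, as suggested above.
    Set-DnsServerDiagnostics -EnableLogFileRollover $false

    # Confirm the change took effect.
    (Get-DnsServerDiagnostics).EnableLogFileRollover
    ```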
  • I am trying to do a similar task. My concern is that there is no way to ensure gaps will not exist in the data. For example, today a log file fills up once a day and the scheduled collection task runs twice per day. That works well, but tomorrow that log file might roll over four times while the scheduled collection task still runs only twice per day. Gaps would exist in that case (a rough check is sketched after this reply). Any suggestion on how to address this? Thank you.

    Maurice
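
    One hedged way to detect that situation, outside InTrust itself, is to count how many log files have changed since the last scheduled collection run and warn when the schedule has fallen behind. The log directory, file naming, and twelve-hour window below are assumptions, not settings from this thread.

    ```powershell
    # Warn if more log files rolled over since the last collection run
    # than a twice-daily schedule can keep up with (paths/naming assumed).
    $logDir  = "$env:WinDir\System32\dns"   # assumed rollover location
    $lastRun = (Get-Date).AddHours(-12)     # assumed: collection runs twice a day

    $newFiles = Get-ChildItem -Path $logDir -Filter 'dns*.log' |
        Where-Object { $_.LastWriteTime -gt $lastRun }

    if ($newFiles.Count -gt 1) {
        Write-Warning ("$($newFiles.Count) log files changed since the last " +
                       'collection; consider increasing the collection frequency.')
    }
    ```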
  • This is a familiar conundrum that we used to face in the days before InTrust "streamed" data to the Repository as it does today.

    Back then, it was trial and error during the initial deployment of the product to determine the sweet spot for data collection frequency.

    Is there really any harm in collecting the data more frequently?
  • No. I could probably do it every hour, but even that may not guarantee you get all the data on an extremely busy DNS server. Are there any checks to indicate when you hit the point at which the frequency needs to be increased (similar to the warnings you receive for gathering tasks: "Data collection was started from the beginning of the event log, because InTrust couldn't find the last gathered event position")?
  • I read over the whole thread and something occurred to me:

    Would it be practical for you to implement within InTrust a separate batch job that would collect data from the "extra" log files if they existed?

    So the "regular" collection would work with the current file, but you would have another job that would (perhaps) fire once a day to collect data from the "extra" files.

    In my mind, the practicality of this hinges upon how often you need to produce reports. If it's less than once per day, then I *think* my idea would work? (A rough staging sketch follows below.)
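
    A generic way to stage such "extra" files for a separate daily job, sketched in PowerShell. This is not InTrust functionality; the directories and file naming are assumptions, and the newest file is skipped on the assumption that the DNS server is still writing to it.

    ```powershell
    # Move all rolled-over log files except the newest one into a staging
    # folder that a separate, once-daily collection job points at.
    $logDir     = "$env:WinDir\System32\dns"  # assumed rollover location
    $stagingDir = 'C:\DnsLogStaging'          # hypothetical staging folder

    New-Item -ItemType Directory -Path $stagingDir -Force | Out-Null

    Get-ChildItem -Path $logDir -Filter 'dns*.log' |
        Sort-Object LastWriteTime -Descending |
        Select-Object -Skip 1 |
        Move-Item -Destination $stagingDir
    ```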
  • I do not need to generate reports. I need to collect the data in case a security-related activity requires information from these logs. I have not collected files before, just strictly event log data. I think I need to test it before I have additional questions. Do you have an alternate location for the plug-in? I cannot reach the link referenced earlier in this thread.

    Thanks.