
Use the Splunk Python SDK command interface #9

Open · lowell80 opened this issue Nov 9, 2018 · 3 comments
Labels: backwards compatibility (Likely to BREAK backwards compatibility)
Milestone: Post 2.0

Comments


lowell80 commented Nov 9, 2018

Switch to the new "chunked encoding" interface and re-implement the search commands in a more forward-looking way. Note that this drops support for Splunk versions prior to 6.3, which probably isn't a big deal at this point.

One concern is how dynamic fields are handled in StreamingCommand, for example: if a field doesn't exist on the first record, it's omitted from the output entirely. I'm not sure what causes this, but it doesn't seem to be a problem with the old-school InterSplunk approach.
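
For context, a minimal SDK-style streaming command where this bites might look like the sketch below (the command name, f3, and extra are made-up; the splunklib.searchcommands API itself is real):

    #!/usr/bin/env python
    import sys
    from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

    @Configuration()
    class DynFieldCommand(StreamingCommand):
        """Adds a field to some records but not others."""

        def stream(self, records):
            for record in records:
                # If "extra" is absent on the first record the SDK writes,
                # the field appears to be dropped from the output entirely.
                if record.get("f3"):
                    record["extra"] = "value"
                yield record

    dispatch(DynFieldCommand, sys.argv, sys.stdin, sys.stdout, __name__)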

lowell80 added the backwards compatibility label Nov 9, 2018
lowell80 added this to the Post 2.0 milestone Nov 9, 2018

lowell80 commented Nov 22, 2018

Solved one of the 2 big hurdles.

The dynamic fields issue can be solved by using the following statement whenever the fields change from one event to the next:

self._record_writer.flush(finished=False)

Here's a silly example:

    def stream(self, records):
        i = 0
        last_v = 0
        for record in records:
            i += 1
            v = i // 3
            # Every third record switches to a new dynamic field name.
            record["field_{}".format(v)] = "HI"
            yield record
            if v != last_v:
                # The field set changed: flush so the next chunk gets a
                # fresh CSV header that includes the new field.
                self._record_writer.flush(finished=False)
                last_v = v

The recorded output (via record=t) looks like this:

# gzcat MyCommand-1542855500.795022.getinfo.output.gz
chunked 1.0,50,0
{"required_fields":["f1","f2"],"type":"streaming"}
chunked 1.0,18,127
{"finished":false}f1,__mv_f1,f2,__mv_f2,_chunked_idx,__mv__chunked_idx,field_0,__mv_field_0
hi,,othe,,0,,HI,
hi,,othe,,1,,HI,
hi,,othe,,2,,,
chunked 1.0,18,127
{"finished":false}f1,__mv_f1,f2,__mv_f2,_chunked_idx,__mv__chunked_idx,field_1,__mv_field_1
hi,,othe,,3,,HI,
hi,,othe,,4,,HI,
hi,,othe,,5,,,
chunked 1.0,18,127
{"finished":false}f1,__mv_f1,f2,__mv_f2,_chunked_idx,__mv__chunked_idx,field_2,__mv_field_2
hi,,othe,,6,,HI,
hi,,othe,,7,,HI,
hi,,othe,,8,,,
chunked 1.0,17,93
{"finished":true}f1,__mv_f1,f2,__mv_f2,_chunked_idx,__mv__chunked_idx,field_3,__mv_field_3
hi,,othe,,9,,HI,

The Splunk search looked like this: | makeresults count=10 | eval f1="hi", f2="othe", f3=if(random() % 3=0, "blah",null()) | mycmd record=t
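
As an aside on the framing above: each chunked 1.0,<m>,<n> line is a transport header where <m> is the byte length of the JSON metadata and <n> is the byte length of the CSV body (note that {"finished":false} is exactly 18 bytes). A rough sketch of a reader for poking at these recorded .gz files (read_chunk is a hypothetical helper, not part of the SDK):

    import gzip
    import json

    def read_chunk(stream):
        """Read one chunk framed as 'chunked 1.0,<meta_len>,<body_len>' + newline."""
        header = stream.readline()
        if not header:
            return None  # end of stream
        _tag, sizes = header.rstrip(b"\n").split(b" ", 1)  # drop the 'chunked' tag
        _version, meta_len, body_len = sizes.split(b",")
        metadata = json.loads(stream.read(int(meta_len)))  # e.g. {"finished": false}
        body = stream.read(int(body_len)).decode("utf-8")  # CSV header + rows
        return metadata, body

    with gzip.open("MyCommand-1542855500.795022.getinfo.output.gz", "rb") as f:
        chunk = read_chunk(f)
        while chunk:
            print(chunk[0])  # the metadata dict for each chunk
            chunk = read_chunk(f)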

Also, I think I've proven out that the fieldnames input can be any string as long as it's quoted, which is generally what you'd expect anyway.
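
For example, something like this should work (hypothetical field name):

    | makeresults | eval "field with spaces"="x" | mycmd "field with spaces" record=t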

lowell80 commented

Hold up -- NEW PROBLEM

Using the _chunked_idx column apparently doesn't work well if the output is split into more chunks than the input. The output from the search command itself looks okay, but any existing fields NOT passed into the streaming command are lost.

Emergency workaround: done.
I just confirmed that dropping @Configuration(required_fields=["f1","f2"]) does keep this from happening, but it now requires that ALL fields be passed through the search command.
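
Side by side, the trade-off looks something like this (a sketch; class and field names are illustrative):

    from splunklib.searchcommands import StreamingCommand, Configuration

    # Declaring required_fields limits what Splunk feeds the command, but
    # (per the above) fields outside the list get lost when the output is
    # split into more chunks than the input:
    @Configuration(required_fields=["f1", "f2"])
    class NarrowCommand(StreamingCommand):
        def stream(self, records):
            for record in records:
                yield record

    # Omitting required_fields avoids the field loss, at the cost of
    # shipping every field through the command:
    @Configuration()
    class WideCommand(StreamingCommand):
        def stream(self, records):
            for record in records:
                yield record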

lowell80 commented

Revised workaround version.

Returning too many new chunks causes issues. The exact number of chunks at which this becomes a problem is unknown. However, this logic attempts to minimize the number of chunks by making sure that the field set is cumulative (to handle the issue of sporadic fields). We need better visibility into how the chunking mechanism works under the covers before this can be solved correctly. Is the issue the total number of chunks, or too many small chunks back to back? Is there a timing element involved (if chunks are produced more slowly, does that make a difference)?

This probably needs to be taken to the next level by introducing some sort of buffering mechanism that processes hundreds or thousands of results at a time and then reduces the total number of chunks returned; see the sketch after the code below.

The really simple approach would be to buffer all 50,000 results in memory at once, which is what InterSplunk does, and to some degree what the Splunk SDK seems to do as well. (If all results are returned in one chunk, which is the default, then they are all buffered in memory anyway; see the use of the StringIO buffer.)

    def prepare(self):
        # Deliberately NOT promoting self.fieldnames to required_fields;
        # declaring required_fields triggers the field-loss problem above.
        #if self.fieldnames:
        #    self.configuration.required_fields = self.fieldnames
        pass

    def stream(self, records):
        known = None
        for record in self._stream(records):
            fields = set(record.keys())
            if known is None:
                known = fields
            elif fields.difference(known):
                # New fields found: grow the cumulative field set,
                # backfill this record so it carries every field seen so
                # far, and flush so it starts a new chunk whose CSV
                # header includes the full set.
                #self.logger.info("New fields found:  %s" % fields.difference(known))
                known.update(fields)
                for f in known.difference(fields):
                    record[f] = ""
                self._record_writer.flush(finished=False)
            yield record

    def _stream(self, records):
        # Todo: Check whether any of the required field names are missing
        # from the fields Splunk actually sent.
        # Test generator: adds a rotating dynamic field to each record.
        i = 0
        for record in records:
            i += 1
            record["field_{}".format(i % 10)] = "HI"
            yield record
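
As for the batching idea above, a sketch of what that might look like (batch_size and the backfill details are guesses, untested against the chunking problem):

    def stream(self, records):
        batch, known, batch_size = [], set(), 1000
        for record in records:
            batch.append(record)
            known.update(record.keys())
            if len(batch) >= batch_size:
                for rec in self._drain(batch, known):
                    yield rec
                # One flush per batch keeps the chunk count low.
                self._record_writer.flush(finished=False)
                batch = []
        # Emit whatever is left over at the end of the input.
        for rec in self._drain(batch, known):
            yield rec

    def _drain(self, batch, known):
        # Backfill every record to the cumulative field set so the CSV
        # header is stable for the whole batch.
        for rec in batch:
            for f in known.difference(rec.keys()):
                rec[f] = ""
            yield rec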
