From 7dc75852b9aa1bc6f388018dbdaf070865d2fe27 Mon Sep 17 00:00:00 2001
From: Koen Derks
Date: Wed, 5 Aug 2020 09:24:33 +0200
Subject: [PATCH] Update cran-comments

---
 README.md                    |   2 +-
 cran-comments.md             |  11 +-
 vignettes/auditWorkflow.html | 505 -----------------------------------
 3 files changed, 4 insertions(+), 514 deletions(-)
 delete mode 100644 vignettes/auditWorkflow.html

diff --git a/README.md b/README.md
index 12e8d4ffd..5eaa272e5 100644
--- a/README.md
+++ b/README.md
@@ -53,7 +53,7 @@ The `jfa` package can then be loaded in RStudio by typing:
 
 library(jfa)
 ```
 
-Examples can be found in the package [vignette](./vignettes/auditWorkflow.html).
+Examples can be found in the package [vignette](https://cran.r-project.org/package=jfa/vignettes/auditWorkflow.html).
 
 ## Functions

diff --git a/cran-comments.md b/cran-comments.md
index b91356a65..abdf77a0b 100644
--- a/cran-comments.md
+++ b/cran-comments.md
@@ -1,12 +1,7 @@
-## Version 0.2.0
-This is jfa version 0.2.0. In this version I have:
+## This is a resubmission for version 0.2.0
+This is jfa version 0.2.0 (resubmission). In this version I have:
 
-* Implemented prior construction methods `none`, `median`, `hypotheses`, `sample`, and `factor` in the `auditPrior()` function.
-* Implemented `minPrecision` argument in the `planning()` function and the `Evaluation()` function.
-* Return the value `mle` and the value of the `precision` from the `Evaluation()` function.
-* Implemented `increase` argument in the `planning()` function.
-* Implemented more efficient versions of certain algorithms in the `sampling()` function.
-* Changed the x-axis labels in the default plot to theta instead of misstatement.
+* Removed the non-canonical URLs in the README file and changed them to canonical URLs.
 
 ## Test environments
 * OS X install (on travis-ci), R release

diff --git a/vignettes/auditWorkflow.html b/vignettes/auditWorkflow.html
deleted file mode 100644
index afff45e07..000000000
--- a/vignettes/auditWorkflow.html
+++ /dev/null
@@ -1,505 +0,0 @@

jfa: The audit sampling workflow


Koen Derks


Introduction and scenario


This vignette accompanies the jfa R package and aims to show how it facilitates auditors in their standard audit sampling workflow (hereafter “audit workflow”). In this example of the audit workflow, we will consider the case of BuildIt. BuildIt is a fictional construction company in the United States that is being audited by Laura, an external auditor for a fictional audit firm. Throughout the year, BuildIt has recorded every transaction they have made in their financial statements. Laura’s job as an auditor is to make a judgment about the fairness of these financial statements. In other words, Laura needs to either approve or not approve BuildIt’s financial statements. Not approving the financial statements would mean that, as a whole, they contain errors that are considered material; that is, the errors are large enough that they might influence the decision of someone relying on the financial statements. Since BuildIt is a small company, their financial statements consist of only 3500 transactions, each with a corresponding recorded book value. Before assessing the details in the financial statements, Laura has already tested BuildIt’s computer systems that processed these transactions and found them to be quite reliable.


In order to draw a conclusion about the fairness of BuildIt’s recorded transactions, Laura separates her audit workflow into four stages. First, she will plan the size of the subset she needs to inspect from the financial statements to make a well-substantiated inference about them as a whole. Second, she will select the required subset from the financial statements. Third, she will inspect the selected subset and determine the audit value (true value) of the transactions it contains. Fourth, she will use the information from her inspected subset to make an inference about the financial statements as a whole. To start off this workflow, Laura first loads BuildIt’s financial statements into R.

library(jfa)
data("BuildIt")

Setting up the audit


In statistical terms, Laura wants to make a statement that, with 95% confidence, the maximum error in the financial statements is lower than what is considered material. She therefore sets the materiality, the maximum tolerable error in the financial statements, at 5%. Based on last year’s audit at BuildIt, where the maximum error turned out to be 2.5%, she expects at most 2.5% errors in the sample that she will inspect. Laura can therefore reformulate her statistical statement as follows: when 2.5% errors are found in her sample, she wants to be able to conclude, with 95% confidence, that the misstatement in the population is lower than the materiality of 5%. Below, Laura defines the materiality, confidence, and expected errors.

# Specify the materiality, confidence, and expected errors.
materiality   <- 0.05   # 5%
confidence    <- 0.95   # 95%
expectedError <- 0.025  # 2.5%

Many audits are performed according to the audit risk model (ARM), which states that the uncertainty about Laura’s statement as a whole (1 - her confidence), the audit risk, is the product of three terms: the inherent risk, the control risk, and the detection risk. Inherent risk is the risk posed by an error in BuildIt’s financial statements that could be material, before consideration of any related control systems (e.g., computer systems). Control risk is the risk that a material misstatement is not prevented or detected by BuildIt’s internal control systems. Detection risk is the risk that Laura will fail to find material misstatements that exist in BuildIt’s financial statements. The ARM is practically useful because, for a given level of audit risk, the tolerable detection risk bears an inverse relation to the other two risks. This is useful for Laura because it enables her to incorporate prior knowledge about BuildIt’s organization and accept a higher detection risk, while the overall audit risk is retained.


\[ \text{Audit risk} = \text{Inherent risk} \,\times\, \text{Control risk} \,\times\, \text{Detection risk}\]


Usually, the auditor judges inherent risk and control risk on a three-point scale consisting of low, medium, and high. Different audit firms use different standard percentages for these categories. Laura’s firm defines the probabilities of low, medium, and high as 50%, 60%, and 100%, respectively. Because Laura performed testing of BuildIt’s computer systems, she assesses the control risk as medium (60%).

# Specify the inherent risk (ir) and control risk (cr).
ir <- 1     # 100%
cr <- 0.6   # 60%
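As a side note, the detection risk that the ARM implies for these assessments can be computed directly. This is only an illustrative check, not a required step, and the resulting value reappears in the prior summary printed in the next stage.

# Illustrative check (not a required step): the detection risk implied by the
# audit risk model, for an audit risk of 1 - confidence = 0.05.
detectionRisk <- (1 - confidence) / (ir * cr)   # 0.05 / (1 * 0.6), approximately 0.083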

Stage 1: Planning an audit sample


Laura can choose to either perform a frequentist analysis, where she uses the increased detection risk as her level of uncertainty, or perform a Bayesian analysis, where she captures the information in the control risk in a prior distribution. For this example, we will show how Laura performs a Bayesian analysis. A frequentist analysis can easily be done through the following functions by setting prior = FALSE. In a frequentist audit, Laura immediately starts at step 1 and uses the value adjustedConfidence as her new value for confidence.

# Adjust the required confidence for a frequentist analysis.
adjustedConfidence <- 1 - ((1 - confidence) / (ir * cr))

In a Bayesian audit, Laura starts at step 0 by defining the prior distribution that corresponds to her assessment of the control risk. She assumes the likelihood for a sample of \(n\) observations, in which \(k\) were in error, to be \(\text{binomial}(n, k)\). Using the auditPrior() function, she can create a prior distribution that incorporates the information in the risk assessments from the ARM. For more information on how this is done, see Derks et al. (2019).

# Step 0: Create a prior distribution according to the audit risk model.
priorResult <- auditPrior(materiality = materiality, confidence = confidence, expectedError = expectedError,
                          method = "arm", ir = ir, cr = cr, likelihood = "binomial")

Laura can inspect the resulting prior distribution with the print() function.

print(priorResult)
## # ------------------------------------------------------------
## #         jfa Prior Distribution Summary (Bayesian)
## # ------------------------------------------------------------
## # Input:
## #
## # Confidence:              0.95
## # Expected sample errors:  0.025
## # Likelihood:              binomial
## # Specifics:               Inherent risk = 1; Internal control risk = 0.6; Detection risk = 0.083
## # ------------------------------------------------------------
## # Output:
## #
## # Prior distribution:      beta(2.275, 50.725)
## # Implicit sample size:    51
## # Implicit errors:         1.275
## # ------------------------------------------------------------
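As a quick and purely illustrative sanity check, these prior parameters can be reproduced from the implicit sample reported above, assuming the uniform beta(1, 1) starting point that these numbers imply:

# Illustrative check: a beta(1, 1) prior updated with the implicit sample of
# n = 51 observations containing k = 1.275 errors yields the parameters above.
nImplicit <- 51
kImplicit <- 1.275
c(shape1 = 1 + kImplicit, shape2 = 1 + nImplicit - kImplicit)   # 2.275 and 50.725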

The prior distribution can be shown by using the plot() function.

plot(priorResult)

Now that the prior distribution is specified, Laura can calculate the required sample size for her desired statement by using the planning() function. She passes the priorResult object to the planning() function so that her prior distribution is taken into account.

# Step 1: Calculate the required sample size.
planningResult <- planning(materiality = materiality, confidence = confidence, expectedError = expectedError,
                           prior = priorResult)

Laura can then inspect the result of her planning procedure by using the print() function. The result tells her that, given her prior distribution, she needs to audit a sample of 169 transactions so that, when at most 4.23 errors are found in that sample, she can conclude with 95% confidence that the maximum error in BuildIt’s financial statements is lower than the materiality of 5%.

print(planningResult)
## # ------------------------------------------------------------
## #              jfa Planning Summary (Bayesian)
## # ------------------------------------------------------------
## # Input:
## #
## # Confidence:              95%
## # Materiality:             5%
## # Minimum precision:       100%
## # Likelihood:              binomial
## # Prior:                   beta(2.275, 50.725)
## # Expected sample errors:  4.23
## # ------------------------------------------------------------
## # Output:
## #
## # Sample size:             169
## # ------------------------------------------------------------

Laura can inspect how the prior distribution compares to the expected posterior distribution by using the plot() function. The expected posterior distribution is the posterior distribution that would result if Laura were to actually observe a sample of 169 transactions in which 4.23 were in error.
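Before plotting, this expected posterior can also be reconstructed by hand as a purely illustrative check. The sketch below assumes the conjugate beta updating that the evaluation output later in this vignette displays (prior shape parameters plus the expected errors and expected error-free units); its 95% upper bound should come out just below the 5% materiality, which is why 169 is the required sample size.

# Illustrative check of the planned sample size (not a required step).
nPlan <- 169
kPlan <- nPlan * expectedError              # 4.225 expected errors in the sample
postShape1 <- 2.275 + kPlan                 # prior shape1 plus expected errors
postShape2 <- 50.725 + nPlan - kPlan        # prior shape2 plus expected error-free units
qbeta(confidence, postShape1, postShape2)   # should be just below the 0.05 materiality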

plot(planningResult)

Stage 2: Sampling from the financial statements


Laura is now ready to select the required 169 transactions from the financial statements. She can choose to do this according to one of two statistical methods. In record sampling (units = "records"), inclusion probabilities are assigned at the transaction level, treating high-value and low-value transactions the same: a transaction of $5,000 is just as likely to be selected as a transaction of $1,000. In monetary unit sampling (units = "mus"), inclusion probabilities are assigned at the level of individual monetary units (e.g., a dollar). When a dollar is selected to be in the sample, the transaction that includes that dollar is selected. This favors larger transactions, as a transaction of $5,000 is five times more likely to be selected than a transaction of $1,000.
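The intuition behind monetary unit sampling can be illustrated directly from the book values; this sketch is for illustration only, since the sampling() function used below performs the actual selection.

# Illustration only: under monetary unit sampling, a transaction's chance of
# being selected is proportional to its recorded book value.
inclusionProb <- BuildIt$bookValue / sum(BuildIt$bookValue)
head(inclusionProb)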


Laura chooses to use monetary unit sampling, as she wants to include more high-valued transactions. The sampling() function allows her to sample from the financial statements. She uses the planningResult object as an input for the sampling() function.

# Step 2: Draw a sample from the financial statements.
samplingResult <- sampling(population = BuildIt, sampleSize = planningResult, units = "mus", bookValues = "bookValue",
                           seed = 999)

Laura can inspect the outcomes of her sampling procedure by using the print() function.

print(samplingResult)
## # ------------------------------------------------------------
## #                  jfa Selection Summary
## # ------------------------------------------------------------
## # Input:
## #
## # Population size:         3500
## # Requested sample size:   169
## # Sampling units:          Monetary units
## # Algorithm:               Random sampling
## # ------------------------------------------------------------
## # Output:
## #
## # Obtained sample size:    169
## # Proportion n/N:          0.048
## # Percentage of value:     4.864%
## # ------------------------------------------------------------

Stage 3: Executing the audit


The selected sample can be isolated by indexing the sample object from the sampling result. Laura can now execute her audit by annotating the transactions in the sample with their audit (true) values, for example by writing the sample to a .csv file using write.csv(). She can then load her annotated sample back into R for further evaluation.

# Step 3: Isolate the sample for execution of the audit.
sample <- samplingResult$sample

# To write the sample to a .csv file:
# write.csv(x = sample, file = "auditSample.csv", row.names = FALSE)

# To load the annotated sample back into R:
# sample <- read.csv(file = "auditSample.csv")

For this example, the audit values of the sample are already included in the auditValue column of the dataset.
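A quick look at the annotated sample (assuming it carries over the bookValue and auditValue columns used in the evaluation step below) confirms that each selected transaction now has both a recorded and an audited value:

# Inspect the first few rows of the annotated sample.
head(sample)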


Stage 4: Evaluating the sample


Using her annotated sample, Laura can perform her inference with the evaluation() function. By passing the priorResult object to the function, she automatically sets method = "binomial" to be consistent with her prior distribution.

# Step 4: Evaluate the sample.
evaluationResult <- evaluation(sample = sample, materiality = materiality, prior = priorResult,
                               bookValues = "bookValue", auditValues = "auditValue")

Laura can inspect the outcomes of her inference by using the print() function. Her resulting upper bound is 3.518%, which is lower than the materiality of 5%. The output also states the corresponding conclusion directly.

print(evaluationResult)
## # ------------------------------------------------------------
## #             jfa Evaluation Summary (Bayesian)
## # ------------------------------------------------------------
## # Input:
## #
## # Confidence:               95%
## # Materiality:              5%
## # Minimum precision:        100%
## # Sample size:              169
## # Sample errors:            3
## # Sum of taints:            1.8
## # Method:                   binomial
## # Prior distribution:       beta(2.27, 50.73)
## # ------------------------------------------------------------
## # Output:
## #
## # Posterior distribution:   beta(4.08, 217.92)
## # Most likely error:        1.385%
## # Upper bound:              3.518%
## # Precision:                2.133%
## # Conclusion:               Approve population
## # ------------------------------------------------------------
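These numbers can be reproduced approximately from the rounded posterior parameters in the output above; this is an illustrative check rather than part of the jfa workflow, and small differences arise because the printed prior and taints are rounded.

# Illustrative check using the rounded posterior parameters from the output above.
qbeta(0.95, 4.08, 217.92)            # 95% upper bound, approximately 0.035
(4.08 - 1) / (4.08 + 217.92 - 2)     # posterior mode, approximately 0.014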

She can inspect the prior and posterior distribution by using the plot() function. The shaded area quantifies the area under the posterior distribution that contains 95% of the probability, which ends at 3.518%. Therefore, Laura can state that there is a 95% probability that the misstatement in BuildIt’s statements is lower than 3.518%.

plot(evaluationResult)

Conclusion


Since the 95% upper confidence bound on the misstatement in BuildIt’s financial statements is lower than the materiality, Laura can conclude that the financial statements as a whole do not contain material misstatement. Therefore, she can approve BuildIt’s financial statements.
