ebm() fits an exemplar-based model.

  • gcm() fits a generalized context model (also known as an exemplar model) for discrete responses (Medin & Schaffer, 1978; Nosofsky, 1986)

  • ebm_j() fits an exemplar-based judgment model for continuous responses (Juslin et al., 2003)

Usage

gcm(
  formula,
  class,
  data,
  choicerule,
  fix = NULL,
  options = NULL,
  similarity = "minkowski",
  ...
)

ebm_j(
  formula,
  criterion,
  data,
  fix = NULL,
  options = NULL,
  similarity = "minkowski",
  ...
)

mem(formula, criterion, data, choicerule, options = NULL, ...)

ebm(formula, criterion, data, mode, fix = NULL, options = NULL, ...)

Arguments

formula

A formula, the variables in data to be modeled. For example, y ~ x1 + x2 | x3 + x4 models response y as a function of two stimuli with features x1, x2 and x3, x4, respectively. Pipes (|) separate stimuli.

class

A formula, the variable in data with the feedback about the true class/category, for example ~ cat. NAs are interpreted as trials without feedback (partial feedback, see details).

data

A data frame, the data to be modeled.

choicerule

A string, the choice rule. For allowed values, see cm_choicerules(): "none" applies no choice rule, "softmax" is the soft-maximum rule, "luce" is Luce's choice axiom.

fix

(optional) A list with parameter-value pairs of fixed parameters. If missing, all free parameters are estimated. If set to "start", all parameters are fixed to their start values. Model parameter names depend on formula and class and can be x1, x2, lambda, r, q, b0, b1 (see details - model parameters).

  • list(r = 2.70) sets parameter r equal to 2.70.

  • list(r = "q") sets parameter r equal to parameter q (estimates q).

  • list(q = "r", r = 2.70) sets parameter q equal to parameter r and sets r equal to 2.70 (estimates none of the two).

  • list(r = NA) omits the parameter r, if possible.

  • "start" sets all parameters equal to their initial values (estimates none). Useful for building a first test model.

options

(optional) A list; its entries change the modeling procedure. For example, list(lb = c(k=0)) changes the lower bound of parameter k to 0, and list(fit_measure = "mse") changes the goodness-of-fit measure in parameter estimation to mean-squared error. For all options, see cm_options.
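As a sketch, several such entries can be combined in a single list (the entries shown are the ones mentioned above; see cm_options for the full set):

```r
# Combining several modeling options in one list (see cm_options):
opts <- list(lb = c(k = 0),          # lower bound of parameter k set to 0
             fit_measure = "mse")    # fit by mean-squared error
# Then pass it to the model, e.g. gcm(..., options = opts)
```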

similarity

(optional) A string, similarity function, currently only "minkowski".

...

other arguments, ignored.

criterion

A formula, the variable in data with the feedback about the continuous criterion, for example ~ val. NAs are interpreted as trials without feedback (partial feedback, see details).

mode

(optional) A string, the response mode, can be "discrete" or "continuous", can be abbreviated. If missing, will be inferred from criterion.

discount

A number, the number of initial trials to ignore during parameter fitting.

Value

Returns a cognitive model object, which is an object of class cm. A model assigned to m can be summarized with summary(m) or anova(m). The parameter space can be viewed using parspace(m); constraints can be viewed using constraints(m).

Details

The model can predict new data via predict(m, newdata = ...), which works as follows:

  • If newdata's criterion or class variable has only NAs, the model predicts using the originally supplied data as exemplar memory. Parameters are not re-fit.

  • If newdata's criterion or class variable has values other than NA, the model predicts the first row of newdata using the originally-supplied data as exemplars in memory, but predictions for subsequent rows of newdata also use the criterion values in newdata. In other words, the exemplar memory is extended by those exemplars in newdata for which a criterion exists. Parameters are not re-fit.
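As a sketch, assuming M is the fitted model from the Examples below (the exact predicted values depend on the fitted parameters):

```r
# Predicting new data with a fitted model; parameters are not re-fit.
newD <- data.frame(f1 = c(1, 2), f2 = c(0, 1), cl = c(1, NA))
predict(M, newdata = newD)
# Row 1 is predicted from the originally-supplied exemplar memory.
# Because row 1 has feedback (cl = 1), it extends the exemplar memory
# used to predict row 2 (which itself has no feedback, cl = NA).
```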

Model Parameters

The model has the following free parameters, depending on the model specification (see npar()). A model with formula ~ x1 + x2 has parameters:

  • x1, x2 (dynamic names) are attention weights, their names correspond to the right side of formula.

  • lambda is the sensitivity; larger values make the similarity decrease more steeply with increasing distance.

  • r is the order of the Minkowski distance metric (2 is a Euclidean metric, 1 is a city-block metric).

  • q is the shape of the relation between similarity and distance, usually equal to r.

  • In gcm():

    • b0, b1 (dynamic names) are the biases towards the categories; their names are b plus the unique values of class. For example, b0 is the bias for class = 0.

    • If choicerule = "softmax": tau is the temperature or choice softness, higher values cause more equiprobable choices. If choicerule = "epsilon": eps is the error proportion, higher values cause more errors from maximizing.
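The roles of these parameters can be illustrated with the similarity computation. The base-R sketch below shows one common formulation of exemplar similarity (a weighted Minkowski distance passed through an exponential decay); the package's exact implementation may differ in detail:

```r
# Similarity between a probe x and a stored exemplar y (sketch):
#   d = (sum_i w_i * |x_i - y_i|^r)^(1/r)   weighted Minkowski distance
#   s = exp(-lambda * d^q)                  similarity, decaying with d
similarity <- function(x, y, w, lambda, r, q) {
  d <- sum(w * abs(x - y)^r)^(1 / r)
  exp(-lambda * d^q)
}
similarity(x = c(0, 0), y = c(1, 2), w = c(0.5, 0.5),
           lambda = 0.5, r = 1, q = 1)
# With r = q = 1: d = 0.5*1 + 0.5*2 = 1.5, so s = exp(-0.75), approx. 0.47
```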

Partial Feedback

The model treats NA values in the class/criterion variable as trials without feedback, in which a stimulus was shown but no feedback about the class or criterion was given (partial feedback paradigm). For such trials, the model predicts the class or criterion based on the previous exemplar(s) for which feedback was given, and it ignores these trials when predicting subsequent trials.

References

Medin, D. L., & Schaffer, M. M. (1978). Context theory of classification learning. Psychological Review, 85, 207-238. https://doi.org/10.1037/0033-295X.85.3.207

Nosofsky, R. M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115, 39-57. https://doi.org/10.1037/0096-3445.115.1.39

Juslin, P., Olsson, H., & Olsson, A.-C. (2003). Exemplar effects in categorization and multiple-cue judgment. Journal of Experimental Psychology: General, 132, 133-156. https://doi.org/10.1037/0096-3445.132.1.133

See also

Other cognitive models: baseline_const_c(), bayes(), choicerules, cpt, hm1988(), shift(), shortfall, threshold(), utility

Author

Jana B. Jarecki, jj@janajarecki.com

Examples

# Make some fake data
D <- data.frame(f1 = c(0,0,1,1,2,2,0,1,2),     # feature 1
                f2 = c(0,1,2,0,1,2,0,1,2),     # feature 2
                cl = c(0,1,0,0,1,0,NA,NA,NA),  # criterion/class
                 y = c(0,0,0,1,1,1,0,1,1))     # participant's responses

M <- gcm(y ~ f1+f2, class = ~cl, D, fix = "start",
         choicerule = "none")                  # GCM, par. fixed to start val.

predict(M)                                     # predicted Pr(cl = 1 | features, trial)
#> [1] 0.5000000 0.0000000 0.6123276 0.3228975 0.2359031 0.4526775 0.3258317
#> [8] 0.3598675 0.3258317
M$predict()                                    # -- (same) --
#> [1] 0.5000000 0.0000000 0.6123276 0.3228975 0.2359031 0.4526775 0.3258317
#> [8] 0.3598675 0.3258317
summary(M)                                     # summary
#> 
#> Model:
#>   with no choice rule
#> Call:
#> y ~ f1 + f2
#> 
#> No Free Parameters
#> 
#> Fit Measures:
#> MSE: 0.33, LL: -7.5, AIC: 15, BIC: 15
#> 
anova(M)                                       # anova-like table
#> Sum Sq. Table
#>  N Par Sum Sq Mean Sq
#>      0 2.9373 0.32636
logLik(M)                                      # Log likelihood
#> 'log Lik.' -7.545741 (df=0)
M$logLik()                                     # -- (same) --
#> 'log Lik.' -7.545741 (df=0)
M$MSE()                                        # mean-squared error
#> [1] 0.326362
M$npar()                                       # number of free parameters
#> [1] 0
M$get_par()                                    # parameter values
#>     f1     f2 lambda      r      q     b0     b1 
#>    0.5    0.5    0.5    1.5    1.5    0.5    0.5 
M$coef()                                       # 0 free parameters
#> NULL


### Specify models
# -------------------------------
gcm(y ~ f1 + f2, class = ~cl, D, 
    choicerule = "none")                          # GCM (has bias parameter)
#> Fitting free parameters [f1, lambda, r, q, b0] by maximizing loglikelihood (binomial pdf) with solnp.
#> GCM - multiplicative - minkowski | choice rule: none
#> Call:
#> gcm(formula = y ~ f1 + f2, class = ~cl, data = D, choicerule = "none")
#> 
#> Free parameters: estimates 
#>     f1  lambda       r       q      b0  
#>  0.001   0.001   1.000   1.000   0.175  
#> 
#> Constrained and fixed parameters:
#>   f2    b1  
#> 1.00  0.82  
#> 
#> ---
#> Note:  View constraints by constraints(.), view parameter space by parspace(.)
ebm(y~f1+f2, criterion=~cl, D, mode="discrete",
    choicerule = "none")                          # -- (same) --
#> Fitting free parameters [f1, lambda, r, q, b0] by maximizing loglikelihood (binomial pdf) with solnp.
#> Exemplar model - multiplicative - minkowski | choice rule: none
#> Call:
#> ebm(formula = y ~ f1 + f2, criterion = ~cl, data = D, mode = "discrete",  ...
#> 
#> Free parameters: estimates 
#>     f1  lambda       r       q      b0  
#>  0.001   0.001   1.000   1.000   0.175  
#> 
#> Constrained and fixed parameters:
#>   f2    b1  
#> 1.00  0.82  
#> 
#> ---
#> Note:  View constraints by constraints(.), view parameter space by parspace(.)
ebm_j(y ~ f1 + f2, criterion = ~cl, D)              # Judgment EBM  (no bias par.)
#> Fitting free parameters [f1, lambda, r, q] by minimizing mse with solnp.
#> Exemplar-based judgment - multiplicative - minkowski | choice rule: none
#> Call:
#> ebm_j(formula = y ~ f1 + f2, criterion = ~cl, data = D)
#> 
#> Free parameters: estimates 
#>     f1  lambda       r       q  
#>  0.001   0.523   1.000   1.000  
#> 
#> Constrained and fixed parameters:
#>   f2  
#>    1  
#> 
#> ---
#> Note:  View constraints by constraints(.), view parameter space by parspace(.)
ebm(y~f1+f2, criterion=~cl, D, mode="continuous")   # -- (same) --
#> Fitting free parameters [f1, lambda, r, q] by minimizing mse with solnp.
#> Exemplar model - multiplicative - minkowski | choice rule: none
#> Call:
#> ebm(formula = y ~ f1 + f2, criterion = ~cl, data = D, mode = "continuous")
#> 
#> Free parameters: estimates 
#>     f1  lambda       r       q  
#>  0.001   0.523   1.000   1.000  
#> 
#> Constrained and fixed parameters:
#>   f2  
#>    1  
#> 
#> ---
#> Note:  View constraints by constraints(.), view parameter space by parspace(.)


### Specify parameter estimation
# -------------------------------
gcm(y~f1+f2, ~cl, D, fix=list(b0=0.5, b1=0.5),
     choicerule = "none")                       # fix bias pars 'b0','b1' to 0.5 & fit 4 par
#> Fitting free parameters [f1, lambda, r, q] by maximizing loglikelihood (binomial pdf) with solnp.
#> GCM - multiplicative - minkowski | choice rule: none
#> Call:
#> gcm(formula = y ~ f1 + f2, class = ~cl, data = D, choicerule = "none",  ...
#> 
#> Free parameters: estimates 
#>     f1  lambda       r       q  
#>  0.001   0.493   1.000   1.000  
#> 
#> Constrained and fixed parameters:
#>   f2    b0    b1  
#>  1.0   0.5   0.5  
#> 
#> ---
#> Note:  View constraints by constraints(.), view parameter space by parspace(.)
gcm(y~f1+f2, ~cl, D, fix=list(f1=0.9,f2=0.1),
     choicerule = "none")                       # fix attention 'f1' to 0.9, 'f2' to 0.1 & fit 4 par
#> Fitting free parameters [lambda, r, q, b0] by maximizing loglikelihood (binomial pdf) with solnp.
#> GCM - multiplicative - minkowski | choice rule: none
#> Call:
#> gcm(formula = y ~ f1 + f2, class = ~cl, data = D, choicerule = "none",  ...
#> 
#> Free parameters: estimates 
#> lambda       r       q      b0  
#>  0.001   1.524   1.000   0.175  
#> 
#> Constrained and fixed parameters:
#>   f1    f2    b1  
#> 0.90  0.10  0.82  
#> 
#> ---
#> Note:  View constraints by constraints(.), view parameter space by parspace(.)
gcm(y~f1+f2, ~cl, D, fix=list(q=2, r=2),
     choicerule = "none")                      # fix 'q', 'r' to 2 & fit 3 par
#> Fitting free parameters [f1, lambda, b0] by maximizing loglikelihood (binomial pdf) with solnp.
#> GCM - multiplicative - minkowski | choice rule: none
#> Call:
#> gcm(formula = y ~ f1 + f2, class = ~cl, data = D, choicerule = "none",  ...
#> 
#> Free parameters: estimates 
#>     f1  lambda      b0  
#>  0.999   0.001   0.175  
#> 
#> Constrained and fixed parameters:
#>    f2      r      q     b1  
#> 0.001  2.000  2.000  0.825  
#> 
#> ---
#> Note:  View constraints by constraints(.), view parameter space by parspace(.)
gcm(y~f1+f2, ~cl, D, fix=list(q=1, r=1),
     choicerule = "none")                      # fix 'q', 'r' to 1 & fit 3 par
#> Fitting free parameters [f1, lambda, b0] by maximizing loglikelihood (binomial pdf) with solnp.
#> GCM - multiplicative - minkowski | choice rule: none
#> Call:
#> gcm(formula = y ~ f1 + f2, class = ~cl, data = D, choicerule = "none",  ...
#> 
#> Free parameters: estimates 
#>     f1  lambda      b0  
#>  0.001   0.001   0.175  
#> 
#> Constrained and fixed parameters:
#>   f2     r     q    b1  
#> 1.00  1.00  1.00  0.82  
#> 
#> ---
#> Note:  View constraints by constraints(.), view parameter space by parspace(.)
gcm(y~f1+f2, ~cl, D, fix=list(lambda=2),
     choicerule = "none")                      # fix 'lambda' to 2 & fit 4 par
#> Fitting free parameters [f1, r, q, b0] by maximizing loglikelihood (binomial pdf) with solnp.
#> GCM - multiplicative - minkowski | choice rule: none
#> Call:
#> gcm(formula = y ~ f1 + f2, class = ~cl, data = D, choicerule = "none",  ...
#> 
#> Free parameters: estimates 
#>   f1     r     q    b0  
#> 0.52  1.00  1.00  0.16  
#> 
#> Constrained and fixed parameters:
#>     f2  lambda      b1  
#>   0.48    2.00    0.84  
#> 
#> ---
#> Note:  View constraints by constraints(.), view parameter space by parspace(.)
gcm(y~f1+f2, ~cl, D, fix="start", 
    choicerule = "none")                        # fix all par to start val. 
#> GCM - multiplicative - minkowski | choice rule: none
#> Call:
#> gcm(formula = y ~ f1 + f2, class = ~cl, data = D, choicerule = "none",  ...
#> 
#> Constrained and fixed parameters:
#>     f1      f2  lambda       r       q      b0      b1  
#>    0.5     0.5     0.5     1.5     1.5     0.5     0.5  
#> 
#> ---
#> Note:  No free parameters. View constraints by constraints(.), view parameter space by parspace(.)