# Selfmade SPSS Frequency Analyses in R

I have been an intensive SPSS user since my time as a psychology student. Accompanying me across all versions during this time were the simple short commands for displaying descriptive statistics. These short commands quickly become second nature and thus enable fast data viewing.

Currently, my tool focus is on R. It is an excellent alternative, but despite extensive experience with this open-source tool, I still miss some of the usability of SPSS. I simply miss my short commands. However, it is relatively easy to add SPSS-like short commands to R yourself as user-defined functions.

## Short commands in SPSS

### SPSS-FREQUENCIES

For example, let us look at the FREQUENCIES command. In SPSS, the first three to four letters of a command suffice as long as they are unambiguous. For instance, "FREQ category." displays, for each distinct value of the column "category", the absolute frequency, the frequency in percent, the frequency in percent of all valid cases (i.e. without missing values), and the cumulative frequency in percent. This is very useful for exploratory data analysis, as I can see at a glance:

• Which are the most frequent or the rarest values?
• What share do these values have of all data and/or of the valid cases?
• What share do the top three categories hold together?

If the command is then adjusted, e.g. "FREQ category /FORMAT AFREQ.", the frequencies are presented in ascending instead of descending order.
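For comparison: base R can produce each of these pieces, just not in one compact command. A minimal sketch with a hypothetical `category` vector:

```r
# hypothetical example vector; NA stands for a missing value
category <- c("a", "a", "b", "c", NA)

table(category, useNA = "always")        # absolute frequencies, incl. missing
prop.table(table(category)) * 100        # percent of valid (non-missing) cases
cumsum(sort(table(category), decreasing = TRUE))  # cumulative counts, descending
```

Stitching these together for every exploratory glance is exactly the kind of typing the SPSS short command saves.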

## FREQUENCIES-Replica in R

Well ..., that is exactly what I would like to have in R.

The command can be realized via a user-defined function as follows:

```r
freq <- function(x, sort = 'dfreq') {
  # absolute frequencies, including missing values
  df <- as.data.frame(table(x, useNA = 'always'))
  if (sort == 'dfreq') {
    df <- df[order(-df$Freq), ]                 # descending by frequency
  } else if (sort == 'afreq') {
    df <- df[order(df$Freq), ]                  # ascending by frequency
  } else if (sort == 'dvalue') {
    df <- df[order(df$x, decreasing = TRUE), ]  # descending by value
  } else if (sort == 'avalue') {
    # do nothing, keep the table default (ascending by value)
  }
  names(df) <- c("value", "freq")
  # percentages of all cases and of valid (non-missing) cases
  df$perc <- df$freq / sum(df$freq) * 100
  df$perc_val <- df$freq / sum(df$freq[!is.na(df$value)]) * 100
  df$perc_val[is.na(df$value)] <- NA
  # cumulative percentages
  perc_cum_v <- c()
  perc_cum <- 0
  for (value in df$perc) {
    perc_cum <- perc_cum + value
    perc_cum_v <- c(perc_cum_v, perc_cum)
  }
  df$perc_cum <- perc_cum_v
  df
}
```
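Once defined, the function can be called much like the SPSS short command, with the `sort` parameter playing the role of the /FORMAT subcommand. A usage sketch with a hypothetical vector:

```r
x <- c("blue", "blue", "red", "green", NA)

freq(x)                   # descending by frequency, like a plain FREQ
freq(x, sort = 'afreq')   # ascending by frequency, like FREQ ... /FORMAT AFREQ.
freq(x, sort = 'avalue')  # ascending by value, the table() default order
```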

What is happening here?

• First of all, the simple frequencies are retrieved via the table command and coerced into a data frame.
• The contents are arranged depending on the parameter "sort".
• Vectorized calculations over the entire frequency column follow in order to determine the percentages.
• Finally, the vector perc_cum_v with the cumulative percentages is built up by looping over the result.
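As a side note, base R's `cumsum()` would produce the same cumulative vector as the explicit loop, which could shorten that last step:

```r
perc <- c(42.9, 28.6, 14.3, 14.3)  # hypothetical perc column

# loop version, as in the function above
perc_cum <- 0
perc_cum_v <- c()
for (value in perc) {
  perc_cum <- perc_cum + value
  perc_cum_v <- c(perc_cum_v, perc_cum)
}

# equivalent one-liner
all.equal(perc_cum_v, cumsum(perc))  # TRUE
```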

I really like the final result! I will probably integrate further functionality later, e.g. a "limit" parameter in order to display only the top x categories, etc. But in the meantime, the new function "freq" has already made my job much easier :-).

The example below shows a result of the function. I have deliberately let the function differ from SPSS in that missing values are taken into account in the cumulative percentages (perc_cum), as I need it that way more often.

### Example Output in R   
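A call on a small hypothetical vector illustrates the resulting data frame, with the missing value included in perc_cum (values shown to R's default seven significant digits):

```r
x <- c("a", "a", "a", "b", "b", "c", NA)
freq(x)
#   value freq     perc perc_val  perc_cum
# 1     a    3 42.85714 50.00000  42.85714
# 2     b    2 28.57143 33.33333  71.42857
# 3     c    1 14.28571 16.66667  85.71429
# 4  <NA>    1 14.28571       NA 100.00000
```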