Textual Analysis of Earnings Calls: Differences Between the Firms (2022-EU-30MP-1056)

Nilofar Varzgani, Assistant Professor of Business Systems and Analytics, La Salle University
Yusuf Joseph Ugras, Interim Dean and Associate Professor of Accounting, La Salle University
Özgür Arslan Ayaydin, Clinical Professor, Department of Finance, University of Illinois Chicago

 

Textual analysis of written documents has become an important analytics tool in accounting and finance decision-making. Several research papers have expanded on textual analysis and have also measured the written tone in financial documents, converting the tone into a quantitative score of optimism/pessimism. Some of this research has connected the tone to abnormal returns in the financial markets. We explore this connection with the use of the Sentiment Analysis platform in JMP Pro 16.

Hello everyone, my name is Nilofar Varzgani.

Today, I'm going to be presenting a research study that I conducted

using JMP Pro 16, and specifically within JMP Pro 16,

I used the Text Explorer platform

as well as the Sentiment Analysis functionality

within that platform.

The title of this study is

Textual Analysis of Earnings Conference Calls: Differences Between Firms.

I'm working on this study with my two co-authors,

Dr. Ugras and Dr. Ayaydin.

They're not going to be presenting with me here today,

but they will surely attend the presentation itself.

I'll start off with a little bit of an introduction.

Textual analysis of written documents has become an important analytics tool

in accounting and finance decision-making.

Several research papers have expanded on textual analysis

and have also measured the written tone in financial documents,

converting the tone into a quantitative score

which measures optimism or pessimism in the tone of the speaker.

Some researchers have also used this tone

to explain abnormal returns in the financial markets.

In my presentation today,

I'm going to talk about how we try to explore this connection

with the use of the Sentiment Analysis platform within JMP Pro 16.

Let's get started here.

All right.

Let's talk a little bit about the motivation behind this study.

For many years, capital market studies have researched

whether quantitative information reported by firms, such as earnings,

revenue, or other accounting measures, influences decision-making.

Recent studies have shown that in addition to this type of quantitative information,

qualitative information from the firms and from the media

influences investor behavior.

This qualitative information includes

texts in 10-K reports, earnings press releases,

conference call transcripts, comment letters to the SEC,

analysts' remarks, articles in the media, and conversations on social media.

Several studies have also shown the importance of earnings conference calls

that immediately follow the quarterly earnings releases of public companies.

In our study here,

we're examining whether the impact of the earnings conference call tones

varies across different groups of companies.

Let's go over a little bit of background

on the literature that covers textual analysis so far.

Textual analysis has been used to analyze

a variety of documents through alternative approaches,

and one could group these approaches into three broad categories.

The first one is the use of the Fog Index,

which is basically a function of two variables:

the average sentence length in number of words,

and the word complexity,

which measures the percentage of words with more than two syllables.
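
To make the index concrete, here is a minimal Python sketch of the Gunning Fog computation; the syllable counter is a crude vowel-group heuristic, so treat this as an illustration rather than a reference implementation:

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fog_index(text):
    """Gunning Fog Index: 0.4 * (average sentence length in words
    + percentage of 'complex' words, i.e., more than two syllables)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    avg_sentence_length = len(words) / len(sentences)
    pct_complex = 100 * sum(count_syllables(w) > 2 for w in words) / len(words)
    return 0.4 * (avg_sentence_length + pct_complex)

print(fog_index("Revenue accelerated. We anticipate a considerable improvement."))
```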

The second category of techniques is the length of reports;

although this seems like a rather simplistic approach,

it has been useful precisely because of its simplicity.

There have been a couple of studies which have used the length of a report

as a proxy for the complexity of the report itself.

The third approach is the use of a word list.

Now, in terms of the word list,

there are a number of word lists that people have created themselves,

such as the Henry word list or the Loughran and McDonald word list.

But in our study here,

we utilized the built-in dictionary that JMP Pro comes with,

and in addition to that built-in dictionary,

we augmented that with some phrases, which we added as terms,

as well as a custom list of stop words

based on the sample data that we were working with.

Let's talk a little bit about the data itself.

Our data sample size is approximately 25,000 observations,

which means that we had close to 25,000 earnings call transcripts

that we analyzed, and the date range for those transcripts

is from 2007 Quarter 1 to 2020 Quarter 4.

We have tried to incorporate only the text portion of the earnings call transcript,

removing any graphics or any special characters

that might be a part of the earnings call transcript.

All of these transcripts were downloaded

from the LexisNexis database in RTF format.

Just to give you a little bit of an intro

as to what the earnings call transcript basically looks like.

It starts off with the title, which mentions the name of the company

for which the earnings call announcement is being made.

It has the words Fair Disclosure Wire on the next line, followed by the date.

Then the main call itself is divided into two sections.

You have the presentation section, which contains the prepared remarks

of the managerial team attending the call,

and then you also have a discussion between the analysts,

who are sitting in on the call live and ask questions,

and the managers, who respond to those questions.

So the prepared remarks and the Q&A portion.

Now, for this study specifically,

we've only looked at the prepared remarks portion of the earnings transcripts,

but later on, as an extension of this study,

we're planning to also incorporate the Q&A portion

of the earnings call.

In addition to those two blocks of text that are a part of the transcript,

most of these transcripts also include

a list of all the participants who are on the call,

which includes all of the managers from the company side,

as well as all of the analysts from the different institutional investors.

Let's talk about the methodology a little bit first.

We extracted the transcript;

the section with the managers' prepared remarks was titled DocBody,

the Q&A was titled Discussion,

and we counted the number of analysts who attended each call.

Now, keep in mind that because not all calls have a Q&A segment,

the Q&A part might be missing for some of the rows in our sample,

which is why, for this conference and this study,

we've focused only on the DocBody,

which is the prepared remarks portion of the earnings call.

Then we also created columns that could be used as identifiers:

a ticker column for further analysis, the year-quarter,

as well as a calculated column which measures the length

of the prepared remarks section in number of words.
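
Outside of JMP, the same preprocessing can be sketched in a few lines of Python; this is a simplified illustration that assumes the RTF files have already been converted to plain text, and the Q&A header pattern and file name are hypothetical stand-ins, since the real transcripts vary in wording:

```python
import re

# Hypothetical section header; actual LexisNexis transcripts vary in wording.
QNA_HEADER = re.compile(r"questions? and answers?", re.IGNORECASE)

def split_transcript(text):
    """Split a plain-text transcript into the prepared remarks (DocBody)
    and the Q&A portion (Discussion); not every call has a Q&A segment."""
    match = QNA_HEADER.search(text)
    if match:
        return text[:match.start()].strip(), text[match.end():].strip()
    return text.strip(), ""

def length_in_words(text):
    """Calculated 'Length' column: number of words in a document."""
    return len(re.findall(r"\b\w+\b", text))

with open("transcript.txt", encoding="utf-8") as f:  # placeholder file name
    doc_body, discussion = split_transcript(f.read())
print(length_in_words(doc_body))
```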

Now, the distribution of length is interesting and we're going to show

that output in a little bit.

Before we moved on to the Text Explorer platform,

we changed the data type of the document body column to character

and the modeling type to unstructured text

so that the Text Explorer platform can work.

Before I show you the Text Explorer platform,

I just want to talk a little bit about the terminology that is going to come up

a lot in the Text Explorer platform and the output shown.

In textual analytics,

a term or a token is the smallest piece of text, similar to a word in a sentence.

You can define terms in many ways, for example through the use of regular expressions,

and the process of breaking down the text into terms is called tokenization.

Another important term that is going to pop up a lot when we see the output

of the Text Explorer platform is a phrase, which is a short collection of terms.

The platform has options to manage phrases,

which can be specified as terms in and of themselves.

For example, for our earnings call study,

a phrase which popped up a lot was something like effective tax rate.

Although effective, tax, and rate are three separate terms,

effective tax rate is used together most of the time,

so we converted that phrase into a term itself, so that we can analyze how many times

that particular phrase as a whole is being used in these conference calls.
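
JMP manages this phrase-to-term promotion inside the Text Explorer platform itself; purely to illustrate the idea, a tokenizer can merge a curated phrase list into single tokens before counting, along these lines (the two phrases shown are examples, not our full list of 1,068):

```python
import re
from collections import Counter

# Example phrases promoted to terms; our actual list contained 1,068.
PHRASES = ["effective tax rate", "gross margin"]

def tokenize(text, phrases=PHRASES):
    """Lowercase the text, merge known phrases into single tokens,
    then split on non-letter characters."""
    text = text.lower()
    for phrase in phrases:
        text = text.replace(phrase, phrase.replace(" ", "_"))
    return re.findall(r"[a-z_']+", text)

tokens = tokenize("Our effective tax rate improved as gross margin expanded.")
print(Counter(tokens).most_common(3))
```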

Next is the document.

A document basically refers to a collection of words.

In a JMP data table, the unstructured text

in each row of the text column corresponds to a document.

Then we also have a corpus,

which is basically a collection of all of the documents.

Now, another important concept, which we're going to use later on in the output,

is stop words.

Stop words are basically common words

which you would want to exclude from analysis.

JMP does come with its own list of stop words,

but there might be specific stop words in the data sample that you are using

which would apply to that data set only.

For us, we created a custom list of stop words

which you can easily view in the Text Explorer platform.

You can have a list of stop words in an Excel file or in a txt file

and then upload that within the Text Explorer platform

and use those as stop words.
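
In JMP, the custom list is uploaded through the platform's stop word option; as a rough Python equivalent of the same filtering step, with stopwords.txt standing in for our custom file:

```python
def load_stop_words(path):
    """Read a custom stop word list, one word per line."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def drop_stop_words(tokens, stop_words):
    """Exclude common or uninformative words from the analysis."""
    return [t for t in tokens if t.lower() not in stop_words]

stops = load_stop_words("stopwords.txt")  # placeholder file name
print(drop_stop_words(["John", "reported", "strong", "revenue", "growth"], stops))
```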

Then finally, there is the process of stemming, which is basically grouping words

with identical beginnings, or stems, so to speak, to make sure

that the result treats those similarly rooted words as one word.

For example, jump, jumped, jumping, all would be treated as one single word

instead of three separate words.

Now, for our study over here, we decided to go with the no stemming option

because there were some issues with the stemming that we noticed.

For example, a word like ration could be used

as a stem for words like acceleration,

which have nothing to do with that word itself.

So we decided to go with the no stemming option in our case.
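
To see both the intended behavior and the kind of over-aggressive grouping we were worried about, here is a quick check using NLTK's Porter stemmer; this is a different stemmer than the one JMP uses, so it only illustrates why inspecting stems before committing to them is worthwhile:

```python
from nltk.stem import PorterStemmer  # pip install nltk

stemmer = PorterStemmer()

# Intended behavior: inflections of one word collapse to a single stem.
print([stemmer.stem(w) for w in ["jump", "jumped", "jumping"]])
# -> ['jump', 'jump', 'jump']

# Inspecting stems like this is how unintuitive groupings, such as the
# 'ration' example above, can be caught before they distort term counts.
print([stemmer.stem(w) for w in ["ration", "rational", "acceleration"]])
```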

On this slide over here, we look at the options that we selected

for the Text Explorer platform.

The main variable of interest is the DocBody.

We use an ID column to identify each row of observations,

and then we changed the default options for each of these features,

the maximum words per phrase, the maximum number of phrases,

and the minimum and maximum characters per word,

to values that we thought would be suitable for our particular data set.

As you can see, we increased the maximum ranges

to a lot higher than the defaults, just to be on the safe side,

so that we're not missing out on any important terms within our analysis.

The initial output that pops up once you run the Text Explorer platform

gives you a list of all of the terms which were used most frequently

within the sample of data, as well as the phrases.

We reviewed the list of phrases

and selected the phrases that could be used as terms.

There were a total of 30,000 phrases

out of which 1,068 phrases were added to the term list.

In addition to that, we also created our custom list of stop words.

We found it easier to export

all the terms from our sample from JMP into Excel, sort those words,

and then mark as stop words all of those words

with certain characteristics: for example, they had symbols, commas,

or dollar signs in them, or they were numbers being treated as text,

or common names, for example, John, Michael, David, etc.

We added all of those to our stop word list

and we uploaded that into the Text Explorer platform.

Let's look at some of our output.

The first analysis that we did was

on the variable length of the prepared remarks.

The assumption over here is that

if the management's prepared remarks section is longer,

it basically shows that they have more to explain to the investors,

and that is why the complexity or the tone of those reports might be different

from other reports which are shorter in length,

where the managers don't have to do a lot of explaining.

As you can see in the distribution output on the left,

the length in our sample data set over here is slightly asymmetric,

with a thin tail towards the right-hand side,

which means that there were some reports which were longer than the others.

The mean of the length is around 3,027 words and the median is around 2,966 words.

You can see the mean and the median are not too far away from each other,

which probably means that we can treat the distribution as roughly symmetric,

even though the histogram over here looks slightly asymmetric.

We did look at the median length of the reports over the years,

and as you can see, in 2007 the earnings calls were much longer

than in the years after that,

and then we did see a slight bump in 2020 as well.

If you look at the quarter-wise length of the reports,

you'll also notice that Q4 generally has the longest reports

because the management is explaining the functions and the operations

of the company for the whole year and they're compiling results

from the previous three quarters as well.

In terms of the tickers which had the longest average length,

Boston Properties reported the longest,

at an average of approximately 6,000 words,

which is double the average length of the whole data set in general.
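
For readers reproducing this outside of JMP, the per-ticker summary is a one-line aggregation in pandas; the file name and column names (Ticker, Length) here are placeholders matching the columns described earlier:

```python
import pandas as pd

# Placeholder file; assumed columns: Ticker, YearQuarter, Length.
calls = pd.read_csv("earnings_calls.csv")

# Average prepared-remarks length per ticker, longest first.
avg_length = calls.groupby("Ticker")["Length"].mean().sort_values(ascending=False)
print(avg_length.head(10))
```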

Next we have the word cloud.

Now, just to compare the stemming option versus the no-stemming option,

on the screen you see both the word cloud with the stemming option,

which is on the right-hand side, and the one without the stemming option.

We preferred the no-stemming option because it lets us see the words

which show up in most of these earnings calls more often than the other words,

whereas the stemming option might end up with a word cloud

which is not very explanatory.

As you can see, growth, new, revenue, and increase

are the words which pop up the most,

which basically signals that managers are mostly optimistic and positive

in tone in the prepared remarks section of their reports.

Then I also have a screenshot of the Sentiment Analysis platform,

which basically tells you the distribution of the overall tone,

positive or negative.

As you can see from this histogram over here,

the overall tone of these prepared remarks is mostly very positive,

with only a very few earnings call transcripts falling

in the negative sentiment portion of this distribution,

which again signals that managers tend to be more positive and more optimistic

when talking about the operations of the company, so that they can signal

that the future is going to be bright and it's going to be better,

and that definitely affects how investors react to this tone.
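
JMP's Sentiment Analysis platform computes these scores internally from its dictionary; purely as a sketch of how a lexicon-based tone score can land on a roughly -100 to +100 scale like the one in this histogram (this is not JMP's actual formula), consider:

```python
# Tiny illustrative lexicons; real dictionaries such as Loughran-McDonald
# contain thousands of words.
POSITIVE = {"growth", "improve", "strong", "record", "increase"}
NEGATIVE = {"decline", "loss", "weak", "impairment", "decrease"}

def tone_score(tokens):
    """Net tone scaled to [-100, 100]: 100 * (pos - neg) / (pos + neg)."""
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos + neg == 0:
        return 0.0
    return 100 * (pos - neg) / (pos + neg)

print(tone_score("we expect strong growth despite a small decline".split()))
# -> 33.3: two positive hits against one negative hit
```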

Next, we also decided to look at the overall sentiment of the calls,

as well as the positive mean and the negative mean of the sentiments.

As you can see, the positive sentiment is mostly around a value of 60,

whereas for the negative sentiment,

we see a bump around minus 40,

so none of these earnings calls were too negative,

even if the company's performance was really bad for that particular quarter.

That's because managers want to signal a brighter future,

and not focus too much on the past.

If you look at the overall sentiment versus the years,

you'll notice that the overall sentiment

was much lower during the financial crisis of 2007-2008,

and then it bumped up hugely in 2009-2010, and overall it has been relatively steady

except for 2020, when the pandemic hit.

If you break it down quarter-wise, you can see that the bottom center graph

over here shows that some quarters, specifically the fourth quarter,

might see a drop in the overall sentiment tone across the whole data set.

If you look at the length versus the year, again, you'll notice that the length was

much higher in 2007, dropped in 2008, peaked again in 2009,

and overall, the length has declined over time through 2020.

It might be a safe assumption to make that

when times are tough and the companies have more to explain,

the earnings calls tend to become longer and the prepared remarks are longer.

However, if you look at length versus the overall sentiment,

you'll notice that there seems to be a slight positive relationship

between length and overall sentiment,

but it's definitely not a simple relationship like a linear upward trend.

Instead, the data is quite heteroscedastic.

Here I have a list of the companies which showed up with the highest

overall sentiment over the years versus the companies which showed up

with the lowest overall sentiment over the years.

I also included the industries to which they belong,

just as an interesting piece of information that we noticed.

For example, a lot of the positive sentiment calls

were the ones from the technology services area or financial services,

whereas the lowest sentiment was in the waste management

or medical technology industries.

In terms of the future research that we plan to do on this topic,

we want to examine the tone of these earnings calls

and do a cross-analysis with variables

like managerial strategic incentives for disclosures,

the impact the tone has on analysts and investors,

as well as some variables which are specific to the firm,

such as their size, their complexity, their age, etc.

We also plan to explore term selection for building data mining models

using the Text Explorer platform within JMP Pro.

Thank you so much for attending this presentation and hopefully

we can answer any questions that you may have about our presentation today.

Thank you.