Thursday, November 19, 2009

Reviews as group-forming.

I chose to do my project on Urban Outfitters--specifically, their reviewing feature. I chose this mostly because I like Urban Outfitters and spend too much time looking at items anyway. I also chose it because I would never have thought of reviews as group-forming before taking this course, so it seemed to fit well as the last project for this course.

Urban Outfitters is a store (both online and in physical locations across several countries) that sells women's and men's apparel and accessories, as well as housewares. Several years ago, a review feature was implemented so that people who had purchased an item could share their response with potential buyers. A question-and-answer section was added within the past year as well, to accommodate users who had been asking questions in the review section. A tagging feature was also recently added, where users can describe an item as "cute" or "80's" or whatever else.

For my project, I coded 108 reviews that had received a vote for “Was this helpful to you? Yes/No.” These were all from the non-sale dresses section.

I coded the number of up-votes versus overall votes, the person’s username, the number of words in their review, whether they were a top contributor, and the three separate ratings they gave the product (overall, fit, and look).

Then, I coded for 11 individual variables present within each review:

1. Tells a story, ex. “When I saw this...”
  • “As soon as I opened the box...”
  • “I read a lot of positive reviews...”
2. Sizing, ex. “I’m 5’4”, 110 pounds”
  • “I’m curvy and...”
  • Or mention of what size they purchased
3. Quality, ex. “The material felt cheap.”
  • “The zipper broke as soon as I unzipped it.”
4. Price, ex. “This was worth every penny.”
  • “Great buy.”
  • “This was not worth what I paid for it.”
5. Wearability, ex. “This dress is too short.”
  • “I didn’t have to wear a bra with this, yay!”
  • “The dress rode up.”
  • “The bottom is see-through.”
6. Fit, ex. “This hugged in all the right places.”
  • “The medium was too loose.”
7. Comfort, ex. “This was tight across...”
  • “This was comfortable.”
8. Appearance, ex. “Cute!”
  • “This dress is so pretty.”
9. Direct recommendations, ex. “You should buy this!”
  • “I would wear this with tights.”
10. Indirect recommendations, ex. “This was obviously made for skinny girls.”
  • “I paired this with tights, and it was perfect.”
11. Responding to other reviewers, ex. “I agree with...”
  • “I didn’t find the material to be cheap.”
  • (I ended up not including this variable in the final ten, so this data went unused.)

After coding for all of this data, I then added up the number of conditions met by each reviewer. The possible maximum was 10. The highest reached by a reviewer was 9; the lowest was 0 (however, this was only one review--which was less of a review and more of a “UO, get more in yellow!”). The average was about 4.79.

I then found the percentage of each review’s feedback that was positive, for simplicity. The vast majority were 100% positive, but the average was 86.74%.
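For anyone curious, the tallying described above can be sketched in a few lines of Python. The variable names and sample reviews here are made up, just to illustrate the computation (each review is coded 0/1 on the ten conditions, plus its up-votes and total votes):

```python
# Hypothetical coded reviews -- not my actual data, just the same shape.
reviews = [
    {"conditions": [1, 1, 0, 1, 0, 1, 1, 0, 0, 0], "up_votes": 1, "total_votes": 1},
    {"conditions": [1, 0, 0, 0, 0, 0, 0, 1, 0, 0], "up_votes": 0, "total_votes": 1},
    {"conditions": [1, 1, 1, 1, 1, 1, 1, 1, 1, 0], "up_votes": 5, "total_votes": 6},
]

# Number of conditions met per review, and the average across all reviews.
met = [sum(r["conditions"]) for r in reviews]
avg_met = sum(met) / len(met)

# Percentage of positive feedback per review, and its average.
pct_positive = [100 * r["up_votes"] / r["total_votes"] for r in reviews]
avg_positive = sum(pct_positive) / len(pct_positive)

print("conditions met:", met, "average:", round(avg_met, 2))
print("percent positive:", [round(p, 2) for p in pct_positive],
      "average:", round(avg_positive, 2))
```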


(Professor Welser suggested that I redo this chart so that you can see concentrations of points--which is a great idea! That would give me a better idea of what I'm actually seeing in my findings, because currently, it doesn't look like much.)

In conclusion, although there seemed to be a small shift toward more up-votes if the reviewer met more conditions (i.e., offered a wider range of information), this was pretty minuscule (a difference of only one condition).

So, there is not enough evidence to suggest that offering a more complete range of information makes readers more likely to up-vote you. It was also difficult to collect this information because the majority of reviews received only one vote, and only about one in every five reviews was voted on at all. The reviews that received 6 or more votes were very out of the ordinary.

In the future...
  • I’d love to play around with the data I’ve collected so far--perhaps find the “necessary conditions” that, on average, must be met to receive up-votes.
  • I’d also like to explore the reviews that received more than 1 vote, because they didn’t seem to follow any sort of pattern.
  • I was able to qualitatively observe a lot of interactions during this process--for example, although there were “top contributors,” their reviews were frequently useless, like, “This dress in white is so cute!” Urban Outfitters encouraged these kinds of reviews by offering top reviewers occasional discounts, but being a “top contributor” depended solely on how many reviews a person added, not on the quality of them. Instead of looking to these reviewers as helpful experts, I think that many readers saw them as annoyances.
So, what do you think? Comments? Suggestions?
