After the Bazaar - what we learned
The adventurous researchers who came along to the first ever Research Bazaar conference were test subjects in a large experiment. What happens when you take 190 researchers from a huge range of disciplines and institutions, 19 courses, three keystories and a lot of felafel, and mix them for three days under one giant tent?
Being researchers ourselves, we love data, and we collected quite a bit over the course of the conference. Participants helped us out by filling out class surveys and commenting on the bits they liked best and what they would change (see, quantitative and qualitative data!), and, of course, there were the many tweets and photographs the conference generated. So, what did we learn?
Software Carpentry is demanding… and surveys are polarising
We asked questions about the main workshops and the short courses that ran on Wednesday covering the pace of the workshops, the materials and teaching, and how useful the workshops had been. The questions were scored on a scale of one to five, where one meant “no way!” and five meant “totally!”
Overall, the mean response to the statement “The pace of the class was comfortable” was 3.6. MATLAB and Python scored highest in this category (both 4.1) while R and NLTK (both 3.1) had pushed participants the hardest. A few people suggested streaming the classes in future, to accommodate both novice and more advanced programmers, which would certainly be an option if we had enough instructors.
Before the main event, a core of instructors had spent three days with Bill Mills, learning about effective teaching strategies. At the Research Bazaar, one of our pledges is that our training will never be boring. It looks like the hard work paid off, as we scored an average 4.1 in response to “the teaching was clear” and 4.3 for “the teaching was engaging.” Python scored highest for clarity of teaching, at 4.5 (which is no surprise, given that one participant said Damien was the best thing about the whole conference) while CAD, Python and Maps all scored 4.6 for their engaging teaching, which is a huge compliment to these teams! We also scored a mean 4.7 in response to the statement “the teaching materials were of high quality.” CAD scored a mean of 4.6 in this category, with a minimum score of 4, which reflects the incredibly hard work that Aliza put into developing the resources for this stream.

The CAD class hard at work
The point of learning a new research tool is, of course, to use it once you’re back at work. We scored an average of 4.0 against the statement “The course gave me a good foundation” and 3.9 against “I feel confident to keep working on my own”. Python received the highest score in this category, with a mean of 4.4.
We know forms can be frustrating, so we also asked participants whether they like filling in surveys. This was the most polarising question: with an average of 2.9 and a first quartile of 2.0, it seems most ResBaz-ers don’t like surveys much, although one enthusiast not only ticked 5 but added a smiley face and the comment “Really!”. Perhaps because there were more social scientists in the room, surveys were most popular with the NLTK class, with an average response of 3.2, while there was the most survey rage in the Python class, where the average response was just 2.6.
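For the curious, summary statistics like these are straightforward to reproduce. Here is a minimal sketch using only Python’s standard library; the responses below are made-up stand-ins for the real survey data, which we haven’t published here.

```python
from statistics import mean, quantiles

# Hypothetical Likert responses on our one-to-five scale
# (1 = "no way!", 5 = "totally!"); not the real survey data.
responses = [2, 3, 1, 5, 2, 4, 2, 3, 2, 5]

avg = round(mean(responses), 1)
q1 = quantiles(responses, n=4)[0]  # first (lower) quartile

print(f"mean = {avg}, Q1 = {q1}")  # → mean = 2.9, Q1 = 2.0
```

`statistics.quantiles` (Python 3.8+) returns the cut points dividing the data into `n` intervals, so `n=4` gives the three quartiles and index 0 is Q1.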
Social media is fun but Big Data is controversial
As well as the main courses, we offered electives in tools such as Aurin, Omeka and Authorea as well as skills like Social Media for research and 3D printing.
We were joined at Research Bazaar by Alberto Pepe and Nathan Jenkins, co-founders of Authorea, who taught an introduction to their tool. This workshop scored highly in relevance to research, so we hope to see a batch of collaborative papers come out of ResBaz written on Authorea.
Our social media guru Dejan joined with Katie Mack, better known on Twitter as @AstroKatie, to share secrets of social media. The enthusiasm of the presenters was reflected in high agreement with the statement “I found the teaching was engaging” (mean 4.6; range 3:5) and “I will use what I learned in my research” (mean 4.5, range 3:5).

ResBaz TV host Dejan shared his knowledge of social media
The most polarising of our electives was “Critical approaches to Big Data”. It attracted such high interest that we ran it twice, and Daniel’s passion for the topic was clearly reflected in the high scores he got for his engaging teaching (mean 4.3, range 3:5). It seems, though, that some people were a bit surprised by the content (he scored the full range of one to five for “The content is relevant to my work”) and two attendees commented that they hadn’t expected that the session would be from a social science perspective!
Getting the information balance right is hard and not everyone can sit on the ground
Following Research Bazaar, we circulated a survey on the event itself, to find out what participants thought of it and how it had run. We used the same scale of one to five for these questions.
It seems we did alright helping you prepare for ResBaz, with a mean response of 4.5 (range 3:5) to the statement “I was given enough information to prepare for ResBaz”. We also scored well on helping you access wifi and class materials, with an average score of 4.6 (range 2:5) against the statement “I was given appropriate information on accessing wifi and class materials”. It seems we could have done better explaining the schedule, however, as we scored 4.0 (range 1:5) for “The program was clear and easy to follow”. A couple of people suggested more maps to navigate the campus and a few got lost in the flood of emails. All noted for next time!
Overall, the range of electives was well-received, with a mean of 4.6 (range 4:5) in response to “The range of electives was sufficient”. Suggestions for next time included a greater range of R materials (both novice and advanced), LaTeX, more databases, and image editing. We also got a couple of reminders that research technology isn’t solely the preserve of scientists and not to forget about the humanities and social science researchers among us!
We were worried we’d wear you out, but everyone responded with either a four or five to “The pace of the event was interesting without being exhausting”. It seems we could have done better advertising the chillout rooms, as a lot of people said they hadn’t known about them. We also got positive responses to “The length of the conference was about right” (mean 4.5, range 3:5) and “The length of each day was about right” (mean 4.4, range 2:5).
We put a lot of thought into making ResBaz safe, friendly and accessible. It was great to see this pay off in the survey results. We scored a mean of 4.5 (range 3:5) against “The accessibility of the venue met my needs”. A few people pointed out that we could have done better providing seating for people who can’t sit comfortably on the ground for long periods and firm parking for wheelchairs. More water stations would also have helped. A couple of respondents said childcare would have been useful to them, so we’re definitely looking into that for next time.

Morning yoga helped provide balance
Hearteningly, we scored really well against “I had fun at ResBaz” (mean 4.8, range 3:5), “I felt welcome at ResBaz” (mean 4.8, range 4:5) and “I felt safe at ResBaz” (mean 4.8, range 4:5). We also scored 4.9 (range 4:5) for “I had confidence in the Code of Conduct” and “I felt staff would take my concerns seriously”.
And would participants recommend ResBaz? It seems they would, with an average response of 4.3 to “I would recommend ResBaz to an honours student who is intending to do a research degree” and 4.5 to both “I would recommend ResBaz to my peers” and “I would recommend to my department heads/research coordinators that they should offer ResBaz style training”. The range on all these questions was one to five. Interestingly, the individual who responded “no way” (one) to recommending ResBaz to honours students said they would “totally” (five) recommend it to their peers and department. Conversely, the individuals who said “no way” to recommending ResBaz to their peers and departments said they would “totally” recommend it to honours students.
We also asked about the best bits, and things participants would change. The responses to these questions were as varied as our attendees, but it came through clearly that they loved the food, the keystories and “the vibe”. Suggestions for next time included slightly longer sessions or having everything closer together so we didn’t lose so much time each day, having fewer electives run in parallel to make it possible to go to more classes, and more time for the posters.
