Hello, my replies below in red.

 

Regards,

Francesco

 

 

From: Foss4smes-team <foss4smes-team-bounces@lists.fsfe.org> On behalf of Katerina Tsinari
Sent: Friday, 21 June 2019 16:36
To: FOSS4SMEs mailing list <foss4smes-team@lists.fsfe.org>
Cc: Cosmas Vamvalis <vamvalis@abe.gr>
Subject: Re: [FOSS4SMEs-team] R: FOSS4SMEs_ Impact surveys

 

Dear all,

 

It's nice to receive detailed feedback from SKUNI, OFE and Ifigeneia. I will now add my own feedback, plus some old notes I have from Ifigeneia on this issue, for consideration.

  • How can we track which countries the survey participants come from? We need to prove certain numbers from each country. Good point - I’ll add a question on the country of origin.
  • How can we stay in contact with the cooperating respondents so that they complete the before and after surveys, including the required reflection statement? Is it better to send these by e-mail? We need guidelines.

I don’t understand this statement, nor the request for guidelines. You explicitly asked me for an online survey we could easily send out to our targets. As with the research in IO1, results will be presented and evaluated as an aggregate. I can’t see how or why we should build individual profiles of respondents. Please clarify your remark and state your expectations.

  • Why did Francesco create the “participant survey” so that it serves the evaluation of the overall platform during piloting and testing? This is being done with O2/A4, and there are certain tools prepared by SKUNI for this purpose. Be careful not to mix those two different things.
    Because this is how I thought a “participant survey” might look. How can you survey participants on the (short-term) impact of a training platform? My idea is that you should ask them about design, features and contents. If you read the questions asked in this first matrix and those asked in the spreadsheet developed by SKUNI for O2/A4, you’ll notice that they are indeed different. In any case, my opinion is that we should take a smart approach when reading the proposal and look at the different sections as a whole, since there are horizontal sections that are not watertight compartments and actually do overlap and interact, as in this case.


    Once again, if you don’t agree with what has already been developed, can you please clarify your expectations here and share your idea of this participant survey?



  • If only 25 responses per survey are allowed in LimeSurvey, we need to create the surveys for each country separately. This is one of the options, though I would leave the door open for another solution that would let us manage 3 survey links instead of 18 (6 countries x 3 surveys).

  • Apart from the “2 Participant Impact surveys”, there are another 8 tools we need to use as a project team. What is the relation of this survey to the rest of the tools? Has Dlearn made sure that there are no overlaps with the other tools?
    Here again, I’m sorry, I don’t get this point. Can you please rephrase it?

    As I said before, there certainly are interactions between the different headings of the project, because “Impact” is a horizontal section that covers different phases of a project’s implementation.
    This count of 9 tools is something you came up with as a means of simplification for internal management, but the application does not contain this count of 9. In fact, you’ll see that some of the items you listed are already being implemented (e.g. the stakeholders matrix and the peer review process) and some of them can’t be listed as a separate “tool” (e.g., we agreed that the “number of ‘expressions of interest’/requests to use the FOSS4SMEs platform” should be included in the survey as an open question, so I can’t treat it as a separate tool in itself).

    Please clarify.


  • Dlearn should be careful not to mix these surveys with Brian’s surveys, called the “self-diagnostic tool” and the “final participant evaluation survey”, which will soon be integrated into Moodle and are focused only on the training course and the lessons/knowledge acquired there. Dlearn’s surveys focus on the impact of all the results produced within this project. At least this is how I personally understand it.
    I’m sorry to read this point, since we discussed this during our last call only a few days ago and there was general agreement that the “pre-diagnostic tool” would account for the “before”. No objections were made.

    In any case, I see that we then have another “final participant evaluation survey”, which I now suppose should be treated as a separate thing. So my count now is:
    • Pre diagnostic tool;
    • Final participant evaluation survey;
    • Assessment tools (IO2/A4);
    • (9) Impact tools, including participant survey.

      My question is: how are we going to deal with all these measures and report on them? What is the plan? How exactly do they differ from one another?
      I think everyone would need clarification on how to proceed.


  • Further, since we need e.g. “written evidence on concrete plans”, a “data metric document” etc., we need Dlearn to clarify in which order and at which point in the project we need to use each impact tool. This is to avoid confusion and questions during the finalisation of the project.
    All the surveys are to be sent after the course is finalised and released, asking participants to take them after completing the course. The project ends in October, so I don’t see a different answer from this, nor could I make a timeline while we still don’t know when the full platform will be available to the public.
    Once again, if you have a different view, please clarify.


  • Concerning the number of words in your “free text questions”, Ifigeneia suggested reducing it from 2000 to 500 if possible. The limit is 2,000 characters, not words.

She also suggested (older notes I kept):

- to ask the questions in a smart way, in order to get the answers we need;

- to have fewer free text questions; there are no free text questions apart from the ones explicitly required by the application.

- to remember that we will need to use the results/information given in the final report as proof of our work;

- tool nr.4 “written evidence of concrete plans” should be developed as a question inside the “after survey”, so that we cover it this way; done

- tool nr.5 “expressions of interest” should be developed as a question inside the “after survey”, so that we cover it this way; done

- tool nr.8 “Peer Review through staff”: concerning the partners, each of us should write 5 lines on where they will use the project results afterwards. The stakeholders can do it at our Multiplier event in Brussels.
Why? I mean, this is the same peer review process we have been implementing since the beginning of the project. To fulfil what is required in this section of the application, I made another version of the survey addressed to ourselves as partners.
What do the stakeholders have to do with the peer review process, which is exclusively internal? Be careful not to mix those things up.
Such statements should be included in the “Exploitation Plan” instead.

- tool nr. 2, the “performance data metric document before and after”, which is for SMEs: she suggested developing it as a questionnaire in LimeSurvey, either as an extra survey or inside the “Participant Impact surveys”. done

 

Concerning the numbers, the proposal gives numbers also for other categories of impact measurement or target group. They are just not on these 2 pages, and one has to look for them elsewhere. Ifigeneia and I have thought about and discussed these numbers at length, and we have already suggested the following numbers per partner to Dlearn:

Tool nr. 1: 15

Tool nr. 2: 15

Tool nr. 3: 5

Tool nr. 4: as many as possible

Tool nr. 5: as many as possible

Tool nr. 6: 40

Tool nr. 7: 30+

 

Since the rest of the team partners don’t know about ATL’s discussion with Dlearn on this so far, here I explain the numbering:

Tool nr. 1: Participant survey of 15 SMEs and VET trainers (before and after) including a reflection statement

Tool nr. 2: Completion of performance data/metric document by 15 SMEs (before and after) 

Tool nr. 3: A set of 5 SME case studies from each partner (30 in total) showcasing the participants’ experience and improved performance

Tool nr. 4: Written evidence from participants of concrete plans and/or actual examples of new SMEs and VET trainers using the FOSS4SMEs platform

Tool nr. 5: Number of ‘expressions of interest’/requests to use the FOSS4SMEs platform by other SMEs, VET trainers, Stakeholders and partners.

Tool nr. 6: A regional/national database of stakeholders and key contacts (stakeholders matrix) with at least 40 regional and national stakeholders with responsibility for VET/SME policy and development.

Tool nr. 7: A formal consultation exercise involving 30+ national and European policy-makers/key stakeholders, based on the policy recommendations (O3).

Tool nr. 8: A peer review, for which partners will nominate a staff member not directly involved in pilot activities to complete a questionnaire providing feedback and comments on the project’s tangible and intangible outcomes.

Tool nr. 9: A persuasive business case of how to make strategic use of FOSS and the use of open educational resources within VET – Can be prepared inside the new chapter of the updated Quality Plan.
This is not a tool, nor a deliverable. Please read the whole paragraph of the application from which you isolated this sentence (p. 61) and you’ll see that it’s the project as a whole that should be taken as a “persuasive business case”.

 

The coordinator’s suggestion, agreed with Dlearn, is to try to reach these numbers as well as we can. If we reach them, we can be optimistic about getting a good evaluation during our Final Reporting period. Of course, we can make a collective decision after we receive feedback from TUD and FSFE (the deadline was set for 21.06).

 

Dear Francesco, when can we have the updated chapter inside the Quality Plan and the tools ready? Is it possible by 28.06?

I can’t answer this question until we clarify all the points above and reach a full and shared understanding of the whole picture.

 

Best,

Katerina

 

 

On Thu, 20 Jun 2019 at 6:30 PM, <francesco.agresta@dlearn.eu> wrote:

Hi Sivan,

you are right, we have no numerical targets for the other categories.

The idea to put a number on them came out of the discussion between me and Katerina, because we thought it would be better for everyone to have a reference target, in order to try to present the same amount of results across the different countries.

I tried to keep the numbers as low as possible to avoid overclaiming, given the time we have left. But, again, this is for us to decide, so let’s hear everyone’s opinion and make a collective decision.

 

Best,

 


 

Francesco Agresta

 

European Project Manager

European Digital Learning Network

Via Domenico Scarlatti, 30

20124 Milano

Mob.  +39 3496027623

Email francesco.agresta@dlearn.eu

www.dlearn.eu

 

 

From: Sivan Pätsch <sivan@openforumeurope.org>
Sent: Thursday, 20 June 2019 16:17
To: francesco.agresta@dlearn.eu; 'FOSS4SMEs mailing list' <foss4smes-team@lists.fsfe.org>
Subject: Re: [FOSS4SMEs-team] Quality Plan v1.2

 

Hi Francesco,

 

Thanks for pointing to the update of the quality plan in the call on Tuesday.

 

I have reviewed chapter 7 on impact, following our discussion on the call, with regard to the number of representatives from the different target groups from whom we want to receive input for the impact measurement.

 

I see that the application (p. 62) points to five case studies per partner for the SME target group, but prescribes nothing for any other category of impact measurement or target group. Is it therefore necessary to commit to a specific number for the other measurements and target groups? Would it be possible to make no binding commitment for the other categories, or at least to reduce the numbers significantly, as we have not promised five inputs for the other categories/target groups?

 

Best,

Sivan

 

On Thu, 20 Jun 2019 at 5:16 PM, <francesco.agresta@dlearn.eu> wrote:

Dear Ifigenia, Sivan and all,

to be honest I wasn’t aware of the SUS label, but the concept is right.

That set of questions comes from the need to create a “participant survey”, as required by the proposal, which could also serve to evaluate the overall platform during piloting and testing.

However, there are currently 15 questions in the matrix, so if everyone agrees I could cut them down to 10 to make it simpler and adapt to the SUS methodology.

 

Another option might be to remove this matrix from the surveys addressed externally (i.e. to SMEs and VET) and leave it only in our internal testing (i.e. the third survey, dedicated to project partners).

 

Reduction of questions in the second matrix + indication of the unit number -> yes, this could easily be adjusted too.

 

On a more practical note, I have just found out that unfortunately the free version of LimeSurvey allows only 25 responses per survey created. I’m sorry, I wasn’t aware of that.

Does anyone among you (maybe the partners more expert in the FOSS world) have any advice on another free and open source tool for creating online surveys that could fit our purpose?

 

Thank you,

 


 

Francesco Agresta

 

European Project Manager

European Digital Learning Network

Via Domenico Scarlatti, 30

20124 Milano

Mob.  +39 3496027623

Email francesco.agresta@dlearn.eu

www.dlearn.eu

 

 

 

 

From: Ifigeneia Metaxa <metaxa@abe.gr>
Sent: Thursday, 20 June 2019 15:46
To: 'Sivan Pätsch' <sivan@openforumeurope.org>; 'Francesco Agresta' <francesco.agresta@dlearn.eu>; 'Jonas Gamalielsson' <jonas.gamalielsson@his.se>
Cc: foss4smes-team@lists.fsfe.org
Subject: RE: [FOSS4SMEs-team] FOSS4SMEs_ Impact surveys

 

Dear all,

 

If I understand correctly, these questions come from the SUS (e.g. https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html) and should therefore be used as they are, if we want to derive a usability result based on this methodology’s scale. The intention is for the same issue to be addressed/asked in both a “positive” and a “negative” way, in order to make sure that the user has a clear understanding and does not respond mechanically. Again, I am not that deep in the project; you know best.
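
For reference, the standard SUS computation maps the ten 1-5 Likert answers onto a fixed 0-100 scale: odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the sum is multiplied by 2.5. A minimal Python sketch of that calculation (the sus_score helper is illustrative only, not part of any project tooling):

  def sus_score(responses):
      # responses: 10 integers in 1..5, in questionnaire order
      # (odd-numbered items positively worded, even-numbered negatively worded)
      if len(responses) != 10 or any(r not in range(1, 6) for r in responses):
          raise ValueError("SUS expects exactly 10 answers, each from 1 to 5")
      contributions = [
          (r - 1) if i % 2 == 0 else (5 - r)  # even index = odd-numbered item
          for i, r in enumerate(responses)
      ]
      return sum(contributions) * 2.5

  # Example: this respondent scores 77.5, above the commonly cited average of ~68
  print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))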

 

Best regards,

Ifigeneia

 

From: Foss4smes-team [mailto:foss4smes-team-bounces@lists.fsfe.org] On Behalf Of Sivan Pätsch
Sent: Thursday, June 20, 2019 4:32 PM
To: Francesco Agresta; Jonas Gamalielsson
Cc: foss4smes-team@lists.fsfe.org
Subject: Re: [FOSS4SMEs-team] FOSS4SMEs_ Impact surveys

 

Hi Francesco,

 

I agree with Jonas and Björn that the number of questions could become an issue for the response rate. Specifically, in the matrix, there is some potential to remove questions that look at the same aspect from a different angle.

 

e.g.:

 

"The learning platform was easy to use" and "I found the learning platform unnecessarily complex"

-> these are similar and could be one question

 

"I felt very confident using the platform"

-> not sure if the confidence of the reader is relevant for us

 

There are also some formatting issues where automated hyphenation (line breaks within a word) seems to be happening, which does not help readability.

 

I don't know if it will be possible, but maybe you can also reduce the questions for the different units of the course? It could also help to add the number of the unit so it's a bit clearer.

 

Best,

Sivan

 

On Tue, 2019-06-18 at 18:15 +0200, Francesco Agresta wrote:

Dear Jonas and Björn,
thank you for your prompt and valuable feedback.
 
 
 
On 18 June 2019 at 17:33, Jonas Gamalielsson <jonas.gamalielsson@his.se> wrote:
 
We have checked the three surveys and noticed that the text of the questions is right-justified; we think it should be left-justified. Several questions have formatting issues which may significantly inhibit respondents from filling in the survey.
 
I see. I didn't select right justification, so I guess the LimeSurvey system did it automatically. I have only started using it recently, but I can try to fix it.
 
We don't understand how the questions on the survey page "FOSS4SMEs 
Impact" relate to impact.
 
In general, few questions actually address the purpose of the survey, as 
stated on the first page of the survey(s): "This survey has been 
developed to assess the impact of the FOSS4SMEs main outputs on 
participating SMEs.". Most questions are at the level of individual experience rather than at the organisational (SME) level.
I get your point, which is entirely reasonable. However, I believe we have to keep in mind the scope of this project and our present status. Do you believe it would be possible to report on impact at the organisational level in the remaining implementation time, given also that the course has still not been released?
I can't see this happening right now, but please give me input if I'm wrong. This is why I thought the easiest way to approach this task would be to keep an individual focus.
 
 
Further, we fear that the large number of questions may reduce the 
response rate.
 
I could take out some of the questions in the two matrices, but I'm afraid the free-text questions at the end of the survey have to stay, because they are specifically requested in the proposal.
 
We also find that there is a significant risk of a low response rate, in particular for SMEs, when using an online survey tool. Hence, it would be appropriate to also provide an offline alternative (e.g. in the form of an ODS template that is provided to potential respondents so that they can fill it in, print it, and send it back via post (landmail/airmail) to address privacy concerns).
I developed an online survey because Katerina and I thought it would make the dissemination of the questionnaire and the collection/analysis of responses easier. However, if you think you'll need an .odt version of it for the SMEs, that's not going to be an issue.
 
Best, 
Francesco
 
 
On 2019-06-14 18:44, francesco.agresta@dlearn.eu wrote:
Dear partners,
 
I’m sending here the links to three impact surveys that we are supposed to send out and have completed during these final months, until the end of the project.
 
They relate to three different target groups:
 
 1. SMEs
    https://foss4smes.limequery.com/1?lang=en

 2. VET Centres, Trainers and Coaches
    https://foss4smes.limequery.com/2?lang=en

 3. Project partners (i.e. ourselves)
    https://foss4smes.limequery.com/3?lang=en
 
The fourth target group to be surveyed will be “Other stakeholders” 
(e.g. policy makers in digital education). They will be part of a 
“formal consultation based on Intellectual Output 3”, which is still 
under development.
 
However, this fourth group will most probably be approached on the occasion of the final conference in Brussels.
 
In addition, please find attached a template for the collection of SME case studies showcasing the participants’ experience and improved performance. We are supposed to collect 5 case studies per partner, 30 in total.
 
These activities relate to the “Impact” strategy described on pages 62-63 of the proposal.
 
I have started updating the Quality Plan accordingly with all the necessary information (you will find it in Keybase), and it will be finalised as soon as we are also done with the 4th target group and the self-diagnostic tool (which is supposed to depict the “before” situation of participants).
 
Please have a look at the surveys and we will discuss them during our 
monthly call coming next Tuesday.
 
Wish you a nice weekend,
 
Francesco Agresta
European Project Manager
European Digital Learning Network
Via Domenico Scarlatti, 30
20124 Milano
Mob. +39 3496027623
Email francesco.agresta@dlearn.eu
www.dlearn.eu
 

 

--

Sivan Pätsch

Digital Policy Adviser

OpenForum Europe

tel +32 (0) 2 486 4151

mob +32 (0) 484 90 71 23

web http://www.openforumeurope.org
Follow us on Twitter @OpenForumEurope
--
OFE Limited, a private company with liability limited by guarantee
Registered in England and Wales with number 05493935
Registered office: Claremont House, 1 Blunt Road, South Croydon, Surrey CR2 7PA, UK

 



 

_______________________________________________
Foss4smes-team mailing list
Foss4smes-team@lists.fsfe.org
https://lists.fsfe.org/mailman/listinfo/foss4smes-team

This mailing list is covered by the FSFE's Code of Conduct. All
participants are kindly asked to be excellent to each other:
https://fsfe.org/about/codeofconduct



--

 

 


Katerina Tsinari
EU Projects consultant
Antoni Tritsi 21, 570 01 Thessaloniki
T: 2310 233 266
Email: tsinari@abe.gr
URL: www.abe.gr
Skype: tsinarikaterina@hotmail.de