Title:
CONTENT DEVELOPMENT AND MODERATION FLOW FOR E-LEARNING DATAGRAPH STRUCTURES
Kind Code:
A1


Abstract:
Embodiments relate to authoring, consuming, and exploiting dynamically adaptive e-learning courses created using novel, embedded datagraph structures, including course macrostructures with embedded lesson microstructures and practice microstructures. For example, courses can be defined by nodes and edges of directed graph macrostructures, in which each node includes one or more directed graph microstructures for defining lesson and practice step objects of the courses. The content and attributes of the nodes and edges can adaptively manifest higher level course flow relationships and lower level lesson and practice flow relationships. Embodiments can exploit such embedded datagraph structures to facilitate dynamic course creation and increased course adaptability; improved measurement of student knowledge acquisition and retention, and of student and teacher performance; enhanced monitoring and responsiveness to student feedback; and access to, exploitation of, measurement of, and/or valuation of respective contributions; etc.



Inventors:
Zaslavsky, Guy (Avihail, IL)
Bobkov, Andrei (Krasnodar, RU)
Application Number:
14/630536
Publication Date:
08/27/2015
Filing Date:
02/24/2015
Assignee:
MINDOJO LTD.
Primary Class:
International Classes:
G06Q50/20; G09B5/00



Primary Examiner:
MCATEE, PATRICK
Attorney, Agent or Firm:
Marsh Fischmann & Breyfogle LLP (Lakewood, CO, US)
Claims:
What is claimed is:

1. A system for content development and moderation in an e-learning datagraph structure, the system comprising: a non-transient course data store that stores a plurality of content items defined in context of a plurality of knowledge entities instantiated as nodes of a course datagraph macrostructure in the course data store, wherein each knowledge entity is linked with at least one other knowledge entity in the course datagraph macrostructure by a respective knowledge edge having a respective set of knowledge edge attributes that defines a course flow relationship between the knowledge entities, and each knowledge entity has at least one datagraph microstructure embedded therein, each datagraph microstructure implemented as a directed graph structure; a course backend processor that is in communication with the course data store and operates to: monitor an effectiveness score of each of the plurality of content items to detect when an effectiveness of any of the plurality of content items falls below a predetermined acceptance level; determine, automatically in response to detecting that the effectiveness of one of the plurality of content items is below the predetermined acceptance level, a cause of the detecting; and formulate a remediation recommendation automatically according to the determined cause, the remediation recommendation indicating the one of the plurality of content items.

2. The system of claim 1, wherein the course backend processor operates to monitor the effectiveness score as a function of responses provided by students via respective processor-implemented course consumption platforms during consumption by the students of the plurality of content items.

3. The system of claim 1, wherein the course backend processor further operates to implement a workflow manager that maintains a present resource availability of a plurality of contributor resources, each contributor resource associated with one of a set of predefined contributor roles.

4. The system of claim 3, wherein the course backend processor further operates to: communicate the remediation recommendation to a first contributor resource associated with a director role via the workflow manager; and receive an instruction at the workflow manager from the first contributor resource instructing the workflow manager to open a task in the workflow manager and to assign the task to a second contributor resource associated with an author role.

5. The system of claim 4, wherein: the instruction to assign the task to the second contributor is according to the present resource availability and the role of the second contributor.

6. The system of claim 4, wherein the course backend processor further operates to: receive a remedial action from the second contributor via a processor-implemented course authoring platform of the second contributor in response to the assigned task.

7. The system of claim 3, wherein: the workflow manager assigns each of the plurality of content items to a present phase of a master workflow comprising a content development phase, a content release phase, and a reactive content moderation phase; the reactive content moderation phase being subsequent to the content release phase in the master workflow; and the course backend processor operates to monitor the effectiveness score of each of the plurality of content items during the reactive content moderation phase.

8. The system of claim 3, wherein the course backend processor further operates to: moderate contributor resource quality by dynamically computing a contributor reputation score for each contributor resource at least partly as a function of the detecting when the effectiveness of any of the plurality of content items falls below a predetermined acceptance level, for those detected content items that are contributed by the contributor resource.

9. The system of claim 1, wherein each datagraph microstructure is one of: a lesson datagraph microstructure having a plurality of lesson step objects, each lesson step object linked with at least another of the lesson step objects in the lesson datagraph microstructure by a respective lesson edge that defines a lesson flow relationship between the lesson step objects in the lesson datagraph microstructure; or a practice datagraph microstructure having a plurality of practice step objects, each practice step object linked with at least another of the practice step objects in the practice datagraph microstructure by a respective practice edge that defines a practice flow relationship between the practice step objects in the practice datagraph microstructure.

10. A method for content development and moderation in an e-learning datagraph structure, the method comprising: monitoring an effectiveness score of each of a plurality of content items to detect when an effectiveness of any of the plurality of content items falls below a predetermined acceptance level, the plurality of content items defined in context of a plurality of knowledge entities instantiated as nodes of a course datagraph macrostructure stored in a non-transient course data store, wherein each knowledge entity is linked with at least one other knowledge entity in the course datagraph macrostructure by a respective knowledge edge having a respective set of knowledge edge attributes that defines a course flow relationship between the knowledge entities, and each knowledge entity has at least one datagraph microstructure embedded therein, each datagraph microstructure implemented as a directed graph structure; determining, automatically in response to detecting that the effectiveness of one of the plurality of content items is below the predetermined acceptance level, a cause of the detecting; and formulating a remediation recommendation automatically according to the determined cause, the remediation recommendation indicating the one of the plurality of content items.

11. The method of claim 10, wherein the monitoring is performed as a function of responses provided by students via respective processor-implemented course consumption platforms during consumption by the students of the plurality of content items.

12. The method of claim 10, further comprising: implementing a workflow manager that maintains a present resource availability of a plurality of contributor resources, each contributor resource associated with one of a set of predefined contributor roles.

13. The method of claim 12, further comprising: communicating the remediation recommendation to a first contributor resource associated with a director role via the workflow manager; and receiving an instruction at the workflow manager from the first contributor resource instructing the workflow manager to open a task in the workflow manager and to assign the task to a second contributor resource associated with an author role.

14. The method of claim 13, wherein: the instruction to assign the task to the second contributor is according to the present resource availability and the role of the second contributor.

15. The method of claim 13, further comprising: receiving a remedial action from the second contributor via a processor-implemented course authoring platform of the second contributor in response to the assigned task.

16. The method of claim 12, further comprising: assigning each of the plurality of content items to a present phase of a master workflow comprising a content development phase, a content release phase, and a reactive content moderation phase, wherein the reactive content moderation phase is subsequent to the content release phase in the master workflow, and wherein the monitoring is performed during the reactive content moderation phase.

17. The method of claim 12, further comprising: moderating contributor resource quality by dynamically computing a contributor reputation score for each contributor resource at least partly as a function of the detecting when the effectiveness of any of the plurality of content items falls below a predetermined acceptance level, for those detected content items that are contributed by the contributor resource.

18. The method of claim 10, wherein each datagraph microstructure is one of: a lesson datagraph microstructure having a plurality of lesson step objects, each lesson step object linked with at least another of the lesson step objects in the lesson datagraph microstructure by a respective lesson edge that defines a lesson flow relationship between the lesson step objects in the lesson datagraph microstructure; or a practice datagraph microstructure having a plurality of practice step objects, each practice step object linked with at least another of the practice step objects in the practice datagraph microstructure by a respective practice edge that defines a practice flow relationship between the practice step objects in the practice datagraph microstructure.

Description:

BACKGROUND

Embodiments relate generally to e-learning systems, and, more particularly, to computer-implemented creation and delivery of adaptive, interactive e-learning courses.

For many years, traditional classrooms have included course materials; teachers to interpret, adapt, and deliver the course materials; and students to learn from the teachers and the course materials. The effective transfer and retention of knowledge in such environments can be improved through increased student engagement and interaction with teachers and course materials, and through increased adaptation by the teachers to the needs and learning styles of the students. However, many pedagogical efforts are frustrated by limitations of traditional classroom environments. For example, it may be difficult or impossible to physically locate students in classrooms with skilled teachers; it may be difficult or impossible for a single teacher to concurrently engage with and adapt to multiple students, particularly when those students have different backgrounds, levels of knowledge, learning styles, etc.; it may be difficult to accurately, or even adequately, measure student knowledge acquisition and retention, or for teachers to adapt their teaching to implicit or explicit student feedback; it may be difficult to dynamically adapt course materials in context of static course materials (e.g., printed textbooks); it may be difficult to measure and respond to teacher or student performance across large (e.g., geographically distributed) populations; it may be difficult to measure or value respective contributions to learning by multiple teachers; etc.

With the increasing ubiquity of computers and Internet access, many attempts have been made to create effective, on-line learning environments. In most instances, these attempts are primarily an on-line implementation of a traditional classroom environment. For example, typical e-learning systems include digital versions of traditional course materials (e.g., digital text and images that mimic those of a traditional, printed textbook), digital self-assessment tools (e.g., digital flash cards, quizzes, etc.), and simple tracking (e.g., quiz scoring, tracking of which lessons have been completed, timers to track time spent, etc.). Some more recent e-learning systems have added more sophisticated functions. For example, newer digital course materials can include hyperlinks, videos, etc.; and some on-line courses include communications functions to permit live chatting with instructors and/or other students, file access and sharing, etc. A few e-learning systems have recently begun to provide limited types of adaptation. For example, some e-learning courses can offer, or force, a review of a certain concept if a certain amount of time has elapsed since the concept was last presented to the student; or a student can be permitted to select from multiple alternative versions of a course, depending on skill level, prior knowledge, goals, etc. Even with the added capabilities facilitated by computers and the Internet, many of the limitations of traditional classrooms and pedagogical approaches frustrate the efficacy of e-learning systems.

BRIEF SUMMARY

Among other things, systems and methods are described for authoring, consuming, and exploiting dynamically adaptive e-learning courses created using novel, embedded datagraph structures, including course datagraph macrostructures with embedded lesson datagraph microstructures and practice datagraph microstructures. For example, courses can be defined by nodes and edges of directed graph macrostructures, in which each node includes one or more directed graph microstructures for defining respective lesson and practice step objects of the courses. The content and attributes of the nodes and edges can adaptively manifest higher level course flow relationships and lower level lesson and practice flow relationships. Various embodiments can exploit such embedded datagraph structures to provide various features, such as facilitation of dynamic course creation and increased course adaptability; improved measurement of student knowledge acquisition and retention, and of student and teacher performance; enhanced monitoring and responsiveness to implicit and explicit student feedback; and access to, exploitation of, measurement of, and/or valuation of respective contributions to learning by multiple teachers, including across multiple courses, disciplines, geographies, etc.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure is described in conjunction with the appended figures:

FIG. 1 shows an illustrative e-learning environment, according to various embodiments;

FIG. 2 shows an illustrative course datagraph macrostructure that includes a particular course flow relationship among its knowledge entities, as defined by knowledge edges between those knowledge entities;

FIG. 3 shows a portion of a course datagraph macrostructure that includes an illustrative lesson datagraph microstructure and an illustrative practice datagraph microstructure embedded in a knowledge entity, according to various embodiments;

FIG. 4 shows an illustrative screenshot of a course datagraph macrostructure editing environment, according to various embodiments;

FIG. 5 shows an illustrative screenshot of a lesson datagraph microstructure editing environment, according to various embodiments;

FIG. 6 shows a block diagram of an illustrative course consumption environment that includes a number of processor-implemented blocks for dynamically adapting course consumption to optimize a student's knowledge acquisition, according to various embodiments;

FIG. 7 shows an illustrative computational system for implementing one or more systems or components of systems, according to various embodiments;

FIG. 8 shows another illustrative computational system for implementing one or more systems or components of systems, according to various embodiments;

FIG. 9 shows a flow diagram of an illustrative method for self-constructing various types of content, according to various embodiments;

FIG. 10 shows a flow diagram of an illustrative method for self-constructing practice datagraph microstructures, according to various embodiments;

FIG. 11 shows a flow diagram of an illustrative method for self-constructing incorrect responses, according to various embodiments;

FIG. 12 shows an illustrative course datagraph macrostructure from which multiple courses are defined, according to various embodiments;

FIG. 13 shows a block diagram of an illustrative dynamic content valuation environment, according to various embodiments;

FIG. 14 shows a flow diagram of an illustrative method for dynamic pricing of e-learning content items, according to various embodiments;

FIG. 15 shows a flow diagram of an illustrative method for dynamic compensation of contributors to e-learning content items, according to various embodiments; and

FIG. 16 shows a flow diagram of an illustrative content development workflow 1600, according to various embodiments.

In the appended figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. However, one having ordinary skill in the art should recognize that the invention may be practiced without these specific details. In some instances, circuits, structures, and techniques have not been shown in detail to avoid obscuring the present invention.

With the increasing ubiquity of computers and Internet access, many attempts have been made to create effective, on-line learning environments. For example, many traditional e-learning systems provide digital versions of traditional course materials, including digital versions of textbooks, some enhanced with videos, hyperlinks, integrated access to reference materials, on-line help, etc. Some traditional e-learning systems further provide self-practice and self-assessment capabilities, such as digital flashcards, timers, scored tests, and review questions. Some traditional e-learning systems also provide communications functions, such as on-line communities and social networking functions, live chatting with instructors and/or other students, etc. Still, in spite of added capabilities facilitated by computers and the Internet, most traditional e-learning approaches have not strayed far beyond the capabilities of traditional classroom environments. Accordingly, the efficacy of such approaches continues to be hindered by many of the same limitations present in traditional classroom environments. For example, on-line e-learning systems have typically been touted as a way to provide more students with more flexibility as to when and where they can learn (e.g., on-line access to courses can put students in front of teachers, regardless of their geographic separation).
However, traditional on-line e-learning systems have done little to address issues, such as facilitating a single teacher's concurrent engagement with and adaptation to multiple students, particularly when those students have different backgrounds, levels of knowledge, learning styles, etc.; accurately, or even adequately, measuring student knowledge acquisition and retention, and facilitating teachers' adapting of their teaching to implicit or explicit student feedback; dynamically adapting course materials to different students and/or other course contexts; measuring and/or responding to teacher or student performance across large (e.g., geographically distributed) populations; measuring and/or valuing respective contributions to learning by multiple teachers; etc.

Among other things, embodiments relate to authoring, consuming, and exploiting dynamically adaptive e-learning courses created using novel, embedded datagraph structures, including course datagraph macrostructures with embedded lesson datagraph microstructures and practice datagraph microstructures. For example, courses can be defined by nodes and edges of directed graph macrostructures, in which each node includes one or more directed graph microstructures for defining lesson and practice step objects of the courses. The content and attributes of the nodes and edges can adaptively manifest higher level course flow relationships and lower level lesson and practice flow relationships. Various embodiments can exploit such embedded datagraph structures to provide various features, such as facilitation of dynamic course creation and increased course adaptability; improved measurement of student knowledge acquisition and retention, and of student and teacher performance; enhanced monitoring and responsiveness to implicit and explicit student feedback; and access to, exploitation of, measurement of, and/or valuation of respective contributions to learning by multiple teachers, including across multiple courses, disciplines, geographies, etc.

For the sake of context, FIG. 1 shows an illustrative e-learning environment 100, according to various embodiments. As illustrated, the e-learning environment 100 can include course authoring platforms 160 and course consumption platforms 170 in communication with course data stores 140, for example, over one or more networks 150. Some embodiments further include a course backend processor 180 for performing any backend and/or otherwise centralized functions. For example, in some implementations, some or all datagraph functionality described herein is performed at the course backend processor 180, and the course authoring platforms 160 and course consumption platforms 170 implement thin clients, or the like, to facilitate interaction by course authors and students with the functionality of the course backend processor 180 (e.g., by providing graphical user interfaces, etc.). The course authoring platforms 160, course consumption platforms 170, and course backend processor 180 can be implemented as any suitable computational system, such as a server computer, desktop computer, laptop computer, tablet computer, smart phone, dedicated portable electronic device, etc. The networks 150 can include any suitable communications networks, including wired and/or wireless communications links, secured and/or unsecured communications links, public and/or private networks, mesh networks, peer-to-peer networks, hub and spoke networks, etc. The course data stores 140 can include any suitable types of storage, including one or more dedicated data servers, cloud storage, etc.

While some embodiments are designed to operate in context of typical computing environments, features are embodied in and facilitated by novel datagraph structures that transform the environment into an adaptive e-learning system. As illustrated, the course data stores 140 can store one or more course datagraph macrostructures 105. Each of the course datagraph macrostructures 105 can include one or more embedded course datagraph microstructures 130. Each datagraph structure includes datagraph nodes 110 linked together by datagraph edges 120. Each datagraph node 110 has node attributes 115, and each datagraph edge 120 has edge attributes 125.
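For illustration only, the node-and-edge organization described above might be sketched as follows. The specification does not prescribe an implementation; the class and field names here (DatagraphNode, DatagraphEdge, CourseMacrostructure) are hypothetical, with attribute dictionaries standing in for the node attributes 115 and edge attributes 125.

```python
# Illustrative sketch of a course datagraph macrostructure; names are
# hypothetical and not part of the specification.
from dataclasses import dataclass, field

@dataclass
class DatagraphEdge:
    source: str                  # id of the source datagraph node
    destination: str             # id of the destination datagraph node
    attributes: dict = field(default_factory=dict)   # edge attributes

@dataclass
class DatagraphNode:
    node_id: str
    attributes: dict = field(default_factory=dict)   # node attributes
    microstructures: list = field(default_factory=list)  # embedded microstructures

@dataclass
class CourseMacrostructure:
    nodes: dict = field(default_factory=dict)        # node_id -> DatagraphNode
    edges: list = field(default_factory=list)

    def add_node(self, node: DatagraphNode) -> None:
        self.nodes[node.node_id] = node

    def link(self, source: str, destination: str, **attrs) -> DatagraphEdge:
        # A knowledge edge linking two knowledge entities, with attributes
        # that define their course flow relationship.
        edge = DatagraphEdge(source, destination, attrs)
        self.edges.append(edge)
        return edge
```

In such a sketch, the directed-graph character of the macrostructure is carried entirely by the (source, destination) pair on each edge, with flow semantics deferred to the edge attributes.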

As used herein, a “course” generally refers to a set of related “knowledge entities,” which can generally include any relatively high-level concepts of the course. Each knowledge entity can include embedded lesson datagraph microstructures, used herein to refer generally to objects that facilitate the acquisition of knowledge relating to particular topics or sub-concepts within the course; and embedded practice datagraph microstructures, used herein to refer generally to objects that facilitate the reinforcement, assessment, and/or refreshment of knowledge relating to particular topics or sub-concepts within the course. For the sake of illustration, a course could be designed to teach basic college writing skills. Each knowledge entity can correspond to a concept, such as “Fundamentals of Good Writing” or “Fundamentals of Critical Thinking and Rhetoric.” Within the “Fundamentals of Good Writing” knowledge entity, there may be a number of lesson datagraph microstructures directed, for example, to “Sentence Structure,” “Paragraph Structure,” or “Essay Organization.” There may also be a number of practice datagraph microstructures that may track the lesson datagraph microstructures, and may be directed, for example, to “Practice Sentence Structure”; or they may combine, split, reorganize, parse, or otherwise deviate from the lesson datagraph microstructures, such as “Practice Elements of Sentences” or “Practice Basic Organization in Writing.”

In typical embodiments, the course data stores 140 can include large numbers of datagraph nodes 110 created by multiple contributors in association with multiple courses and/or as stand-alone datagraph nodes 110 (e.g., a stand-alone concept, virtual textbook chapter, etc.). A course represents a specific subset of all these datagraph nodes 110, along with all their associated objects (e.g., datagraph edges 120, node attributes 115, edge attributes 125, etc.). Creation of a course, then, can include selection of datagraph nodes 110 and linking of those nodes by datagraph edges 120. In some instances, course creation can further include generation of additional datagraph nodes 110, generation (or adaptation, modification, etc.) of node attributes 115 and/or edge attributes 125, etc. Accordingly, some implementations permit a course to be contained within another course, multiple courses to intersect (i.e., share one or more datagraph nodes 110), etc. Further, proper formulation of the datagraph nodes 110 and datagraph edges 120 as course datagraph macrostructures 105 with embedded course datagraph microstructures 130 can facilitate automatic adaptation, and even generation, of courses for particular contexts. For example, consumption of a course by a student using a course consumption platform 170 can include compilation and consumption of the datagraph nodes 110 and their embedded course datagraph microstructures 130.
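The notion of a course as a selected subset of a shared node pool, with intersecting courses sharing nodes, can be sketched as below. The node identifiers and helper functions are hypothetical.

```python
# Sketch: a "course" as a selected subset of a shared pool of datagraph
# nodes, per the description above; identifiers are hypothetical.
node_pool = {"algebra-basics", "linear-equations", "quadratics", "stats-intro"}

def create_course(pool: set, selected_nodes: list, links: list) -> dict:
    """Select datagraph nodes from the pool and link them with edges."""
    missing = set(selected_nodes) - pool
    if missing:
        raise ValueError(f"unknown nodes: {missing}")
    return {"nodes": set(selected_nodes), "edges": list(links)}

algebra_course = create_course(
    node_pool,
    ["algebra-basics", "linear-equations", "quadratics"],
    [("algebra-basics", "linear-equations"), ("linear-equations", "quadratics")],
)

def courses_intersect(course_a: dict, course_b: dict) -> bool:
    """Two courses intersect when they share at least one datagraph node."""
    return bool(course_a["nodes"] & course_b["nodes"])
```

Under this sketch, containment of one course in another is simply a subset relation on the node sets, which is consistent with the implementations described above.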

The node attributes 115 of the datagraph nodes 110 can be used to express any concrete or abstract aspect of the knowledge domain or of the process for acquiring the knowledge. For example, terms, like “concept” are used herein to generally include any aspects of a knowledge domain, such as factual data, lines of reasoning, learning preferences, proofs, principles, definitions, overviews, rules, “thinking skills” (e.g., how to approach an aspect), “performance skills” (e.g., how to apply an aspect), “work orders” (e.g., tools and techniques for handling multi-stage problems, such as solving a differential equation), contexts (e.g., representing groups of knowledge entities and/or sub-groups of knowledge entities), specific lessons, areas of interest (e.g. “sports”, “finance”, etc.), mental tendencies (e.g., a tendency to make careless arithmetic calculation mistakes of a certain type, to make certain grammatical errors of a certain type, to jump to certain false conclusions, to misinterpret certain types of data, etc.), misconceptions (e.g., based on societal prejudices, common misunderstandings, perpetuated falsehoods, etc.), thinking styles (e.g. visual thinking or learning, auditory thinking or learning, embodied thinking or learning, etc.), and/or any other aspect of a knowledge domain. Different types of knowledge domains lend themselves to different types of aspects. For example, a course relating to automotive repair will likely rely more heavily on embodied learning, image-based (or video-based) content, step-by-step processes, diagnostic algorithms, etc.; while a course on French literature will likely rely more heavily on visual learning, textual content, critical reasoning, etc.

In some embodiments, the node attributes 115 of the datagraph nodes 110 can include various types of metadata. Some metadata in the node attributes 115 can be used to describe the respective datagraph node 110. For example, the metadata can include a title of the datagraph node 110 (e.g., a long form, a short form, etc. for different contexts), a summary (as described below), a creator identifier (e.g., the name or other identifier of the instructor who created the node), etc. Other metadata in the node attributes 115 can be used to automatically adapt the datagraph node 110 to particular contexts. For example, the metadata can include default and/or adaptive assumptions for types of student profiles, such as certain alternative forms of content of the datagraph node 110 that can be generated based on different styles of learning, different platform capabilities (e.g., different hardware and/or software of the course consumption platform 170), different knowledge levels of particular students (e.g., whether the concept(s) included in the datagraph node 110 are considered new knowledge, a review of prior knowledge, a preview of future knowledge, etc.), different student context (e.g., student age, geography, affiliation, gender, socioeconomics, etc.), etc.
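One way adaptive metadata might drive variant selection is sketched below. The attribute keys ("variants", "learning_style") are illustrative assumptions, not terms from the specification.

```python
# Hypothetical sketch: node-attribute metadata selecting an alternative
# content form for a student profile, per the description above.
node_attributes = {
    "title": "Solving Differential Equations",
    "variants": {
        "visual": "video-walkthrough",
        "auditory": "narrated-lesson",
        "default": "text-lesson",
    },
}

def select_variant(attributes: dict, student_profile: dict) -> str:
    """Pick the content form matching the student's learning style,
    falling back to the node's default variant."""
    style = student_profile.get("learning_style", "default")
    variants = attributes["variants"]
    return variants.get(style, variants["default"])
```

The same lookup pattern could be extended to the other adaptation axes mentioned above (platform capabilities, knowledge level, student context), each contributing a key into the variant selection.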

In some embodiments, the node attributes 115 can include a “summary.” The summary can be used to automatically generate summaries of the aspects of the respective datagraph nodes 110. In certain implementations, the summary in the node attributes 115 is used to automatically summarize the aspects of a course or lesson at the end of a particular knowledge flow. In other implementations, the summary in the node attributes 115 is used to auto-populate a global or personalized summaries list. For example, the summary can include one or more sets of text populated by a course creator to summarize key concepts covered in a course or a set of lessons. Upon reaching a trigger event (e.g., at the end of a particular lesson or course, at a point in a lesson or course flow specified by the course creator or instructor, when requested by a student (e.g., by pressing a “summarize” button or based on student preferences), after some learning time has elapsed, after some number of concepts has been consumed and/or practiced, after some amount of time has elapsed since last interacting with a particular concept, and/or at any other suitable time), a summary can be generated from the relevant node attributes 115. The summary can include any suitable information, depending on the type of summary and/or when it is being generated. 
For example, the summary can include a collection of one or more sets of text stored in the node attributes 115 of the group of datagraph nodes 110 being summarized; and/or the summary can include additional monitoring information, such as indications of when a student first consumed a particular concept, when the student apparently understood the concept, when the student last reviewed the concept, how long the student spent learning and/or reviewing the concept, what other concepts are related to a learned concept (e.g., as prerequisites, as similar concepts, as next concepts, etc.), where the student's present knowledge fits into an overall course or learning plan, etc.
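The trigger-driven summary generation described above might be sketched as follows; the field names and the shape of the monitoring information are assumptions for illustration.

```python
# Sketch of generating a summary from the node attributes of a group of
# datagraph nodes, optionally annotated with monitoring information.
def generate_summary(nodes: list, monitoring: dict) -> list:
    """Collect summary text from node attributes, annotating each entry
    with per-concept monitoring information where available."""
    lines = []
    for node in nodes:
        text = node.get("summary")
        if not text:
            continue  # a node need not contribute summary text
        seen = monitoring.get(node["id"], {}).get("last_reviewed")
        lines.append(f"{text} (last reviewed: {seen})" if seen else text)
    return lines
```

A trigger event (end of lesson, student pressing a "summarize" button, elapsed time, etc.) would simply call such a generator over the relevant group of nodes.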

As described above, the datagraph nodes 110 are linked by datagraph edges 120, each having edge attributes 125. The datagraph edges 120 can effectively define a “flow relationship” (i.e., an ontological relationship) through a set of datagraph nodes 110. Some portions of the flow relationship can be defined as static. For example, in a static portion of a flow relationship, the flow between certain datagraph nodes 110 is always the same, regardless of the context of the datagraph nodes 110 in a course, lesson plan, etc.; regardless of the type of student; regardless of the student's interaction with the datagraph nodes 110 and/or other datagraph nodes 110; etc. Other portions of the flow relationship can be defined as dynamic. For example, in a dynamic portion of a flow relationship, the flow between certain datagraph nodes 110 is dependent on the context of the datagraph nodes 110 in a course, lesson plan, etc.; dependent on the type of student; dependent on the student's interaction with the datagraph nodes 110 and/or other datagraph nodes 110; etc. In many instances, portions of the flow relationship can be defined as “limited dynamic,” or the like. For example, in a limited dynamic portion of a flow relationship, the flow between certain datagraph nodes 110 can be mostly static, except for limited variances that can be triggered by special student context (e.g., falling outside a large predetermined “normal” range, etc.), special learning context (e.g., a datagraph node 110 for a knowledge entity is being consumed outside its originally intended course context, etc.), and/or in other particular cases.

To define such flow relationships, the edge attributes 125 of each datagraph edge 120 can include a type, a source, and a destination. For example, as shown in FIG. 1, the source of datagraph edge 120ab is datagraph node 110a, and the destination of datagraph edge 120ab is datagraph node 110b. Its edge attributes 125ab can define the type of datagraph edge 120ab in a manner that indicates a relationship in the course datagraph macrostructure 105 between datagraph node 110a and datagraph node 110b. Even though a particular datagraph edge 120 may define relationships that do not, themselves, define a flow between its source and destination nodes, the datagraph edges 120 are still considered as defining a flow relationship at a higher level. For example, certain relationships defined by a datagraph edge 120 (e.g., a variant or special/general case relationship) may define, not how consumption flows from its source to its destination node, but rather which of multiple nodes will be presented to a student during consumption of the datagraph; such that the datagraph edge 120 still helps to define the overall flow relationship among the nodes.
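For the sake of illustration, the node and edge records described above (a type, a source, and a destination, plus additional attributes) can be sketched as follows. The class and field names are illustrative assumptions mirroring the description; no particular implementation is required.

```python
# Minimal sketch of the macrostructure as a directed graph of nodes
# and typed edges; field names are illustrative, not prescriptive.
class DatagraphNode:
    def __init__(self, node_id, attributes=None):
        self.id = node_id
        self.attributes = attributes or {}

class DatagraphEdge:
    def __init__(self, edge_type, source, destination, attributes=None):
        self.type = edge_type           # e.g., "prerequisite", "variant"
        self.source = source            # id of the source node
        self.destination = destination  # id of the destination node
        self.attributes = attributes or {}

# Edge 120ab of FIG. 1: node 110a is the source, node 110b the destination.
node_a = DatagraphNode("110a")
node_b = DatagraphNode("110b")
edge_ab = DatagraphEdge("prerequisite", node_a.id, node_b.id)
```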

FIG. 2 shows an illustrative course datagraph macrostructure 200 that includes a particular course flow relationship among its knowledge entities 210, as defined by knowledge edges 215 between those knowledge entities 210. The course datagraph macrostructure 200, the knowledge entities 210, and the knowledge edges 215 can be implementations of the course datagraph macrostructure 105, the datagraph nodes 110, and the datagraph edges 120 of FIG. 1, respectively. Each knowledge edge 215 is represented as an arrow to indicate its source and destination (i.e., the tail and head of the arrow, respectively), and each arrow is labeled to indicate one of an illustrative set of edge types.

One edge type is a "prerequisite" knowledge edge 215 (labeled as "P"), indicating that the creator of the course flow relationship requires students to consume the source knowledge entity 210 prior to consuming the destination knowledge entity 210 (i.e., knowledge of Entity A is needed to start learning Entity B). In some implementations, the creator of the course flow relationship can define alternative forms of the prerequisite. For example, when the student reaches knowledge entity 210a, its node attributes can indicate that knowledge entity 210a can be considered consumed as long as the student has either consumed knowledge entity 210a or has completed some alternative indication of knowledge of that concept (e.g., previously consumed an alternate knowledge entity 210, correctly answered related questions in a pretest, has authorization from an instructor to skip the knowledge entity 210, has explicitly indicated prior knowledge of the content of the knowledge entity 210 via a prompt, etc.).

Another edge type is a "component" knowledge edge 215 (labeled as "C"), indicating that the creator of the course flow relationship considers a set of destination knowledge entities 210 as a group of co-concepts falling within a macro-concept of a source knowledge entity 210 (e.g., as a bucket, macro-object, etc.). For a set of component knowledge edges 215 sharing a common source, there may not be a flow (e.g., order) defined between their destination objects, or the flow may be defined by the source ("parent") object. For example, it may not be relevant which of the destination knowledge entities 210 linked by component knowledge edges 215 is consumed first or last, but it may be important that some combination of those knowledge entities 210 is consumed as a prerequisite for consuming a next knowledge entity 210. As illustrated in FIG. 2, knowledge entity 210c can be considered as a container object for knowledge entity 210d1, knowledge entity 210d2, and knowledge entity 210d3; and some or all of those knowledge entities 210d (e.g., effectively, some defined completion of the "container" knowledge entity 210c) can be a prerequisite of knowledge entity 210e. In some implementations, the component type of knowledge edge 215 can be used to form complex prerequisites. The complex prerequisites can be defined in any expression form that can be logically parsed, such as using Boolean operators (e.g., "AND," "OR," "∩," etc.), mathematical operators (e.g., "+"), natural language expressions (e.g., "both x and y, or some combination of x or y with z"), predefined expression formats (e.g., "AND(z,OR(x,y))"), etc. For example, as illustrated, the prerequisite of knowledge entity 210e is labeled as "(d1∩d2)∪d3," indicating that the creator of the course flow relationship desires or requires that the student first learn either all the concepts of knowledge entity 210d1 and knowledge entity 210d2, or all the concepts of knowledge entity 210d3.
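For the sake of illustration, evaluating a complex prerequisite expressed in a predefined expression format can be sketched as follows. The sketch assumes the "AND(...)"/"OR(...)" format mentioned above and a minimal hypothetical parser; the function names are illustrative.

```python
# Illustrative sketch: evaluating a complex prerequisite such as
# "(d1 ∩ d2) ∪ d3" of FIG. 2, written here as OR(AND(d1,d2),d3).
def prereq_met(expr, consumed):
    """Return True if the prerequisite expression is satisfied by the
    set of consumed knowledge-entity ids."""
    expr = expr.strip()
    for op, combine in (("AND(", all), ("OR(", any)):
        if expr.startswith(op) and expr.endswith(")"):
            return combine(prereq_met(arg, consumed)
                           for arg in split_args(expr[len(op):-1]))
    return expr in consumed  # a bare knowledge-entity id

def split_args(s):
    """Split top-level comma-separated arguments, respecting nesting."""
    args, depth, start = [], 0, 0
    for i, ch in enumerate(s):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == "," and depth == 0:
            args.append(s[start:i])
            start = i + 1
    args.append(s[start:])
    return args

expr = "OR(AND(d1,d2),d3)"  # the complex prerequisite of entity 210e
```

Under this sketch, a student who has consumed both 210d1 and 210d2, or 210d3 alone, satisfies the prerequisite of knowledge entity 210e.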

Another edge type is a "variant" knowledge edge 215 (labeled as "V"), indicating that the creator of the course flow relationship considers two knowledge entities 210 to be variants of each other, to be selected according to defined conditions. For example, edge attributes of a variant knowledge edge 215 can determine which of alternative knowledge entities 210 to present to a student according to different styles of learning, different platform capabilities, different knowledge levels of the student, different student context, etc. As illustrated, knowledge entity 210a is shown as a prerequisite of knowledge entity 210b1 and knowledge entity 210b2, shown as variants. Depending on predefined conditions stored in the edge attributes, upon completion of knowledge entity 210a, the student will be presented with one of knowledge entity 210b1 or knowledge entity 210b2. For example, the creator of the course flow relationship desires that the student have certain knowledge going into knowledge entity 210c, but that prerequisite knowledge is not presented in knowledge entity 210a or knowledge entity 210b1. Accordingly, if the student is determined not to already have that prerequisite knowledge, the student may be presented with knowledge entity 210b2, which teaches that additional knowledge (as opposed to being presented with knowledge entity 210b1, which assumes prior knowledge of that information). As another example, the creator of the course flow relationship believes that students are more likely to grasp certain concepts in context of case studies tailored to their experiences. Accordingly, the creator formulates and selectively presents alternative versions of a knowledge entity based on a determined profile of the student (e.g., gender, age, likes or dislikes, etc.). Any suitable rationale or basis can drive the use of variant knowledge entities 210.
While the variations are shown as separate knowledge entities 210, variations can also be implemented within a particular knowledge entity 210. For example, as described above, the node attributes of a knowledge entity 210 can include metadata that defines alternative forms of the knowledge entity 210 (e.g., different content) depending on certain characteristics, or the knowledge edges 215 can define variant knowledge entities 210 to present based on their edge attributes.
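For the sake of illustration, variant selection based on conditions stored in edge attributes can be sketched as follows. Representing a condition as a predicate over a student-profile dict is an assumption for illustration; any suitable condition representation could be used.

```python
# Hypothetical sketch: choosing between variant knowledge entities
# according to conditions stored in variant edge attributes.
def select_variant(variant_edges, student_profile, default):
    """Return the destination of the first variant edge whose condition
    matches the student profile, else the default entity."""
    for edge in variant_edges:
        condition = edge["attributes"].get("condition", lambda p: False)
        if condition(student_profile):
            return edge["destination"]
    return default

# Students lacking the prerequisite knowledge get 210b2, which teaches it;
# students who already have it get 210b1, which assumes it.
edges = [{
    "type": "variant",
    "destination": "210b2",
    "attributes": {"condition": lambda p: not p.get("has_prereq", False)},
}]
chosen = select_variant(edges, {"has_prereq": False}, default="210b1")
```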

Though not shown, other edge types are possible. For example, some implementations include a "general case" knowledge edge 215, indicating that the creator of the course flow relationship considers a source knowledge entity 210 to be a general case of a destination knowledge entity 210 (and/or considers the destination knowledge entity 210 to be a special case of its source knowledge entity 210). For example, it may be understood that a particular concept is best learned through specific cases, so that a student's knowledge of a general concept may be determined by the student's apparent grasp of some quantity of those specific cases illustrating the general concept. Such a general case knowledge edge 215 can be used in multiple ways, for example, as a dynamic type of prerequisite generator, where the dependency is a function of student profile information. For example, a general case knowledge edge 215 can be used to link two knowledge entities 210—one as a general case of a sub-concept and the other as a specific case of the sub-concept—that are not dependent on each other (e.g., neither is a prerequisite of the other, and it is intended for the student to consume both knowledge entities 210 as part of consuming the course). For a student determined to be more of a "deductive" thinker, it can be desirable to present the general case as a prerequisite of the specific case (the student will likely have more success acquiring the general case first and be intuitively inclined to subsequently grasp the specific case); while for a student determined to be more of an "inductive" thinker, it can be desirable to present the specific case as a prerequisite of the general case (the student will likely have more success acquiring the specific case first and be intuitively inclined to subsequently grasp the general case).
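For the sake of illustration, using a general case knowledge edge as a dynamic prerequisite generator can be sketched as follows. The "deductive"/"inductive" profile field is an assumed representation for illustration only.

```python
# Illustrative sketch: ordering a general/specific pair of knowledge
# entities according to the student's apparent thinking style.
def order_pair(general_id, specific_id, student_profile):
    """Deductive thinkers see the general case first; inductive
    thinkers see the specific case first."""
    if student_profile.get("style") == "deductive":
        return [general_id, specific_id]
    return [specific_id, general_id]

deductive_flow = order_pair("gen", "spec", {"style": "deductive"})
inductive_flow = order_pair("gen", "spec", {"style": "inductive"})
```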

In addition to type, source, and destination, the edge attributes can include other metadata that may relate to its type. For example, for a prerequisite knowledge edge 215, the metadata may define a “mastery threshold,” indicating a certain level of mastery required by the student in the source knowledge entity 210 before being eligible to start learning the destination knowledge entity 210.
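For the sake of illustration, a mastery-threshold check on a prerequisite knowledge edge can be sketched as follows. The 0-to-1 mastery scale and the attribute name "mastery_threshold" are illustrative assumptions.

```python
# Minimal sketch: a student may start the destination entity only once
# their mastery of the source entity meets the edge's threshold.
def eligible(edge, mastery_scores):
    threshold = edge["attributes"].get("mastery_threshold", 0.0)
    return mastery_scores.get(edge["source"], 0.0) >= threshold

edge = {"type": "prerequisite", "source": "210a", "destination": "210b1",
        "attributes": {"mastery_threshold": 0.8}}
```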

In some embodiments, a course can be defined by generating virtual course boundaries around a set of existing knowledge entities 210 (e.g., and their knowledge edges 215). FIG. 12 shows an illustrative course datagraph macrostructure 1200 from which multiple courses are defined, according to various embodiments. The course datagraph macrostructure 1200 includes a particular course flow relationship among its knowledge entities 210, as defined by knowledge edges 215 between those knowledge entities 210. For example, as described above, each knowledge entity 210 can be related to one or more other knowledge entities 210 as a prerequisite, a variant, a special or general case, etc.

For the sake of illustration, the course data store can include many knowledge entities 210 authored by one or more course authors to present many different concepts with varying degrees of overlap. Suppose, for example, that there is a large number of knowledge entities 210 in the course data store that relate to the field of statistics. One or more course authors may desire to pull from the same set of knowledge entities 210 to create multiple statistics courses, for example, of different difficulty levels (e.g., basic versus advanced), for different types of students (e.g., different backgrounds, different courses of study, etc.), and/or for other reasons. Alternatively, different instructors in different institutions may desire to create their own versions of a statistics course by exploiting existing knowledge entities 210.

Embodiments permit a course author to create a course from an existing set of knowledge entities 210 by selecting a subset of the knowledge entities 210. Because of the data format of the course datagraph macrostructure 1200, the selected knowledge entities 210 and their respective knowledge edges 215 can be compiled to provide students with a functional course flow. Thus, selection of the knowledge entities 210 can effectively form a virtual course boundary 1210 that defines a course. The course datagraph macrostructure 1200 is illustrated to show two different courses with overlapping sets of knowledge entities 210. One course is illustrated by a dashed line that defines a virtual course boundary 1210. Another course is illustrated as the subset of knowledge entities 210 shown with a black background and white text (this is intended also to illustrate a virtual course boundary 1210, but is illustrated in a different manner to avoid overcomplicating the illustration).

In some embodiments, de-confliction may be involved in the course selection. According to some implementations, during course selection, a course authoring platform only permits a course author to select a valid set of knowledge entities 210. For example, when one knowledge entity 210 is selected, the course author may be instructed (e.g., required, prompted, flagged, etc.) to select all the prerequisite knowledge entities 210 to the selected knowledge entity 210; or all prerequisite knowledge entities 210 can be automatically selected. In certain implementations, the course author can override such de-confliction. For example, a course author may desire to start a course with an assumed level of prior knowledge, so that the course author does not desire to add some or all prerequisites to a particular knowledge entity 210. In other implementations, the de-confliction can be performed at compile time (e.g., after the course is created and an attempt is made to compile the course), or at any other suitable time.
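For the sake of illustration, the automatic selection of all prerequisite knowledge entities described above (one form of de-confliction) can be sketched as a transitive closure over prerequisite edges. The dict-based edge representation is an assumption for illustration.

```python
# Hypothetical sketch of de-confliction: when an entity is selected for
# a course, all of its (transitive) prerequisites are added
# automatically, yielding a valid virtual course boundary.
def close_over_prerequisites(selected, prereq_edges):
    """Expand the selected entity ids with every transitive prerequisite.
    prereq_edges maps a destination id to its source (prerequisite) ids."""
    boundary, stack = set(), list(selected)
    while stack:
        entity = stack.pop()
        if entity in boundary:
            continue
        boundary.add(entity)
        stack.extend(prereq_edges.get(entity, []))
    return boundary

# c requires b, and b requires a; selecting only c pulls in all three.
prereqs = {"c": ["b"], "b": ["a"]}
course = close_over_prerequisites({"c"}, prereqs)
```

An implementation that instead prompts the course author, or that permits overrides for assumed prior knowledge, would filter or confirm the expanded set rather than adding it unconditionally.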

According to such embodiments, a course can be defined effectively as a set of pointers to a selected subset of knowledge entities 210. When a student uses a course consumption platform to consume the course, the course flow is automatically defined by the knowledge edges 215 that relate the knowledge entities 210 to each other. Thus a large number of knowledge entities 210 stored in the course data store can be used to concurrently provide many different courses, each being dynamically adaptive to many students.

According to some embodiments, a non-transient course data store stores a course datagraph macrostructure having multiple knowledge entities 210 instantiated as nodes of the course datagraph macrostructure in the course data store. Each knowledge entity 210 can be linked with at least one other knowledge entity 210 in the course datagraph macrostructure by a respective knowledge edge 215 having a respective set of knowledge edge attributes that defines a course flow relationship between the knowledge entities 210. A set of processors (i.e., one or more) in communication with the course data store can implement a course authoring platform (e.g., course authoring platform 160 of FIG. 1) to receive authoring commands, translate the authoring commands to executable datagraph commands, and execute the datagraph commands.

Executing the datagraph commands can permit the course authoring platform to display selectable representations of the knowledge entities to a course author via an interface of the course authoring platform. For example, each knowledge entity can be represented as an interactive graphical element (e.g., a selectable icon) of a graphical user interface. The course authoring platform can receive a number of selections from the course author via the interface of the course authoring platform, such that the selections indicate one or more of the selectable representations. The selections can be made in any suitable manner with any suitable interface device. The selections can be compiled by the course authoring platform to define a virtual course boundary as virtual pointers to a course set of the knowledge entities identified according to the selections, such that the respective knowledge edges linking the course set of the knowledge entities define a course flow relationship within the virtual course boundary. For example, compiling the plurality of selections can include, for each selection: identifying a selected knowledge entity corresponding to the selection; identifying a related knowledge entity of the selected knowledge entity according to a respective knowledge edge that links the selected knowledge entity and the related knowledge entity in the course datagraph macrostructure (e.g., as a prerequisite, a variant, etc.); and automatically adding the related knowledge entity to the course set of the knowledge entities. 
Alternatively, compiling the plurality of selections can include, for each selection: identifying a selected knowledge entity corresponding to the selection; identifying one or more related knowledge entities of the selected knowledge entity, each as a source node of a respective knowledge edge having the selected knowledge entity as its destination node in the course datagraph macrostructure; indicating the identified related knowledge entities to the course author via the interface of the course authoring platform; receiving, from the course author via the interface of the course authoring platform in response to the indicating, a further selection indicating at least one of the related knowledge entities; and adding the indicated at least one of the related knowledge entities to the course set of the knowledge entities in response to receiving the further selection.

The virtual course boundary can be stored as a course definition in the course data store. For example, storing the course definition can permit the course to be offered to many students via respective course consumption platforms of those students (e.g., course consumption platforms 170 of FIG. 1). Using such course consumption platforms, the course datagraph macrostructure can be accessed in the course data store, along with the subset of knowledge entities defined by the stored course definition. At least some of the subset of knowledge entities can be displayed to a student via the course consumption platform in accordance with the course definition, a course flow defined by the respective knowledge edges of the subset of knowledge entities, and interaction commands received from the student via the course consumption platform. For example, as described herein, the datagraph structures can permit the course to be provided to the student in a manner that dynamically adapts to the student's knowledge level, profile, and/or other factors.

As described above with reference to FIG. 1, various functions are facilitated by exploiting novel types of course datagraph macrostructures 105 having course datagraph microstructures 130 embedded therein. FIG. 3 shows a portion of a course datagraph macrostructure 300 that includes an illustrative lesson datagraph microstructure 305 and an illustrative practice datagraph microstructure 350 embedded in a knowledge entity 210, according to various embodiments. The lesson datagraph microstructure 305 and the practice datagraph microstructure 350 can be implementations of the course datagraph microstructure 130 of FIG. 1. In general, each datagraph structure is implemented as a directed graph, which, as described above, includes a set of nodes connected by edges having associated direction. Each course datagraph macrostructure 300 (i.e., and the corresponding macrostructure 105 of FIG. 1 and macrostructure 200 of FIG. 2) is implemented as a directed acyclic graph, which is a directed graph that has no directed cycles (i.e., there is no way to follow a set of edges from a source node and end up back at the source node). In certain instances, the course datagraph macrostructure 300 can manifest various acyclic structures, such as multi-trees, poly-trees, rooted trees, etc. Each datagraph microstructure (e.g., lesson datagraph microstructure 305, practice datagraph microstructure 350, course datagraph microstructures 130 of FIG. 1, etc.) is also implemented as a directed graph, but may or may not be acyclic. For example, some lesson datagraph microstructures 305 loop back on themselves (e.g., depending on student interaction with the lesson step objects, as described below). In a particular knowledge entity 210, the lesson datagraph microstructure 305 and practice datagraph microstructure 350 do not have to manifest the same directed graph structure (e.g., they can have different numbers and types of nodes and/or edges, one can be acyclic when the other is not, etc.).
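For the sake of illustration, the distinction above (the macrostructure must be a directed acyclic graph, while microstructures may loop back on themselves) can be checked with a standard depth-first cycle detection. The algorithm below is a conventional graph technique offered as an illustrative sketch, not a method taken from the specification.

```python
# Illustrative sketch: validating that a directed graph (represented as
# an adjacency dict) contains no directed cycles.
def is_acyclic(adjacency):
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited, on the DFS path, done
    color = {node: WHITE for node in adjacency}

    def visit(node):
        color[node] = GRAY
        for nxt in adjacency.get(node, []):
            if color.get(nxt, WHITE) == GRAY:
                return False  # back edge: a directed cycle exists
            if color.get(nxt, WHITE) == WHITE and not visit(nxt):
                return False
        color[node] = BLACK
        return True

    return all(visit(n) for n in adjacency if color[n] == WHITE)

macro = {"a": ["b"], "b": ["c"], "c": []}  # a valid acyclic macrostructure
micro = {"s1": ["s2"], "s2": ["s1"]}       # a lesson flow that loops back
```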

As illustrated, the lesson datagraph microstructure 305 can include a number of lesson step objects 310 linked together by lesson edges 315. Each lesson step object 310 includes lesson step object attributes 330, and each lesson edge 315 includes lesson edge attributes 340. Each lesson step object 310 can also include a set of (i.e., one or more) associated lesson responses 320. In some embodiments, each lesson step object 310 is linked to one or more other lesson step objects 310 through one or more of its lesson responses 320 via a respective lesson edge 315. Similarly, the practice datagraph microstructure 350 can include a number of practice step objects 360 linked together by practice edges 365. Each practice step object 360 includes practice step object attributes 380, and each practice edge 365 includes practice edge attributes 390. Each practice step object 360 can also include a set of associated practice responses 370. In some embodiments, each practice step object 360 is linked to one or more other practice step objects 360 through one or more of its practice responses 370 via a respective practice edge 365. In some implementations, each lesson datagraph microstructure 305 can effectively represent a particular sub-concept presented via its lesson step objects 310; and each practice datagraph microstructure 350 can effectively represent a practice question, or the like, presented via its practice step objects 360.
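For the sake of illustration, a lesson microstructure in which each step links to other steps through its responses can be sketched as follows. The dict layout and the `follow` helper are illustrative assumptions.

```python
# Minimal sketch: lesson step objects whose responses act as the lesson
# edges of the microstructure; a response may loop back to its own step.
lesson_steps = {
    "step1": {
        "content": "A triangle has three sides. Ready to continue?",
        "responses": [
            {"text": "Yes", "target": "step2"},
            {"text": "Review again", "target": "step1"},  # loops back
        ],
    },
    "step2": {"content": "Lesson complete.", "responses": []},
}

def follow(step_id, response_index):
    """Traverse the lesson edge selected by the student's response."""
    return lesson_steps[step_id]["responses"][response_index]["target"]
```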

For example, the set of lesson responses 320 or practice responses 370 can represent multiple answer choices for a question posed as part of its lesson step object 310 or practice step object 360 (e.g., "The capital of Japan is: (a) Tokyo; (b) Kyoto; (c) Okinawa; or (d) None of the above"), as a prompt posed as part of its lesson step object 310 or practice step object 360 (e.g., "Are you ready to move on?" followed by a selection or text field; or "Click here to continue"), or any other suitable set of responses. Each lesson response 320 or practice response 370 can include any suitable response information, including response content and a response target. For example, the response content can include the text or other information to be presented to the student, and the response target can indicate what should happen if that response is selected by a student. For example, if a student is asked a true/false question having "True" as the correct answer, the lesson responses can show a "True" response and a "False" response to the student (i.e., those are the respective response texts). When the student selects "True," the response target can include an indication of a correct response (e.g., by changing appearance (such as turning green), giving audio feedback (such as playing a chime), or giving other feedback (e.g., showing a special graphic, changing a score, etc.)), and the response target may also automatically proceed to a next lesson step object 310 or practice step object 360 (or provide a second level of responses, such as a selection to proceed to the next lesson step object 310 or practice step object 360, or to return to the (or any) previous lesson step object 310 or practice step object 360).

Each practice step object 360 or lesson step object 310 (or their respective practice responses 370 or lesson responses 320) can, in some implementations, be considered as a prompt of the system having static content resources (e.g., text, images, video, interactive animations, etc.) and/or dynamic content resources (e.g., content that adapts to other tags, contexts, etc.). For example, a dynamic content resource can include a special "congratulate" tag that translates at runtime into a specific intensity of congratulating the student for a correct answer depending on a streak of successes or a difficulty level of the question (e.g., "Wow, that's 3 correct answers in a row!! You're amazing!"). Some dynamic content resources include tags. The tags can facilitate conditional content insertions (e.g., a "snippet" to be displayed only on the first event in which the tag gets a rendering request); peripheral instructions (e.g., a feature tooltip to appear on mouse-over, or a countdown-timer to be used for the containing step); etc. Some implementations include a content chunk bank that stores "chunks" that can be inserted (e.g., using tags) to facilitate centralized updating and/or localization or translation. Each object or response can also be associated with metadata. Some such metadata can be used to create Boolean conditions. For example, a lesson step object 310 can be configured as a "try again" lesson step object 310, so that there must be at least one "correct" lesson response 320 underneath the lesson step object 310, and any incorrect lesson responses 320 will cause the system to automatically bring the student (after reaching the end of the directed graph path by following the incorrect response) back to the try-again lesson step object 310. Other such metadata can define response style types (e.g., does the step accept free-text, or multiple-choice answers, or a slider, or checkboxes, or a numerical value, etc.).
Other such metadata can include correlations to the mental attributes and performance factors; whether a user can expand the set of responses for an object after choosing one of the responses (e.g., for inspecting the other choices, viewing the "path not chosen," etc.); defining whether a particular object is a "root object," meaning it is the beginning of a flow (e.g., within a knowledge entity 210); showing a countdown timer for the object and/or which response should be the implied user choice when the time expires; whether to show an auto-diagnostic menu (e.g., if the user makes a mistake and/or the user gets the next response right); whether to show a time awareness tool for the object; etc. Some implementations permit multiple variations for each lesson step object 310 or practice step object 360, each having different associated metadata, in a way that can allow the flow algorithms to choose an optimal step variation per student at a given point in time.

Embodiments of lesson responses 320 and/or practice responses 370 can include any suitable possible choice by a student as a reaction to a particular lesson step object 310 or practice step object 360 prompt. The responses may or may not be limited by an instructor or creator of the response or object. For example, the response may allow for any free-form response, free-form responses only in a particular format, only a selection of a limited range of predetermined options, etc. A lesson response 320 or a practice response 370 can contain any of the types of rich content (e.g., dynamic content resources, metadata, etc.) as that of its lesson step object 310 or practice step object 360. In some implementations, the responses are permitted to have additional functionality to facilitate processing of various response types. For example, a response can include mathematical formulae or computer code (e.g., scripts, etc.) to define and/or carry out rule sets for matching various user reactions to and/or interactions with the responses and associated objects (e.g., a response may contain code that defines a match if the student enters free-form text indicating any prime number greater than 100). The rich content of the responses can also consider contextual and/or other information separate from the particular student interaction with the response. For example, a response can be accepted as a match if its code determines at runtime that more than two minutes has elapsed since the preceding step was displayed, and no response has been explicitly selected (i.e., a "ran out of time" default can be set to pedagogically focus on timing strategy and solution approach, rather than on the core content). Some examples of response metadata include degree of correctness (e.g., ranging from totally incorrect to fully correct with a spectrum between); matching type (e.g., whether a strict match is required, or whether a free-response step should permit a match with differences in whitespace, punctuation, minor spelling errors, etc.); how to factor in the user's response time and/or confidence level with the correctness to generate an overall performance estimation; how to present the response after it gets clicked (e.g., whether to collapse the set of responses and leave the chosen response visible, to hide all, etc.); whether the response should be navigable as part of the "path not chosen" (e.g., this can be manual or automatic, such as if the subsequent steps are part of a long diagnostic flow and would not make sense for a stand-alone peek); etc.
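For the sake of illustration, rule-based response matching can be sketched as follows, combining the two examples above: a free-form response matched by code (any prime number greater than 100) and a "ran out of time" default. The function and label names are illustrative assumptions.

```python
# Hypothetical sketch of rule-based response matching.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def match_response(text, elapsed_seconds):
    """Return which response rule the student's reaction matches."""
    if text is None and elapsed_seconds > 120:
        return "ran_out_of_time"  # timing-strategy default
    try:
        value = int(text.strip())
    except (AttributeError, ValueError):
        return "no_match"  # not a numerical free-form response
    return "correct" if value > 100 and is_prime(value) else "incorrect"
```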

In general, the lesson step objects 310 are intended to increase a student's level of knowledge of a certain concept, and the practice step objects 360 are intended to develop (e.g., and measure) a student's level of knowledge of a certain concept. For example, while lesson step objects 310 often include questions, review materials, and/or other content that can result in the student “practicing” a particular concept; the primary objective of the lesson step objects 310 is to add to the student's knowledge. Similarly, while a student's interaction with practice step objects 360 can certainly help a student learn underlying concepts, the primary objective of the practice step objects 360 is to further develop the student's understanding of previously seen concepts (e.g., and to measure the student's knowledge level with respect to those concepts). Still, much information about the student can be gleaned by monitoring the student's interaction with both lesson step objects 310 and practice step objects 360. As discussed herein, the embedded datagraph structures allow such monitoring to enhance course adaptations. 
For example, information gleaned from monitoring student interactions with the lesson step objects 310 can cause adaptations in presentation to that student (and/or to other students) of future lesson step objects 310 and/or adaptations in presentation of practice step objects 360 within that knowledge entity 210, can propagate up to cause adaptations in presentation to that student (and/or to other students) of other microstructures of the knowledge entity 210 and/or in other knowledge entities 210, can further propagate to alter profiles and/or other information relating to that student or other students (e.g., for use in characterizing a student's proclivities, learning style, preferred learning times or apparent attention span, etc.), can be used to measure effectiveness of a particular lesson step object 310 or practice step object 360 (e.g., on its own, relative to others in its knowledge entity 210, relative to its course context, relative to alternative approaches to the same or similar concepts, etc.) or effectiveness of a particular contributor (e.g., how the creator of that course or object compares to himself or herself and/or to other contributors, etc.).

Some embodiments include additional functionality for lesson step objects 310 and/or practice step objects 360. For example, practice step objects 360 can be categorized and assigned to “baskets.” Each lesson step object 310 or knowledge entity 210 can contain one or more baskets, each having zero or more assigned practice step objects 360. Each basket can represent a difficulty level, learning type, or any other useful categorization. The basket assignment can be implemented at the knowledge entity 210 level (e.g., using the node attributes of knowledge entities 210 or edge attributes of knowledge edges 215), at the object level (e.g., using practice step object attributes 380 or practice edge attributes 365), or using course-level metadata or other metadata. For example, a basket can be implemented as a separate type of object or effectively as the result of attributes of other objects. The baskets can be used, for example, to dynamically and automatically generate a set of appropriate practice step objects 360 according to a student's interactions with related lesson step objects 310 and/or based on other information (e.g., a student's overall measured knowledge level, learning style, time elapsed since last review, etc.).
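For the sake of illustration only, the basket mechanism described above can be sketched as follows. The function name, the difficulty labels, and the selection rule are hypothetical assumptions for illustration and are not part of any disclosed embodiment:

```python
# Hypothetical sketch of basket-based selection of practice step objects 360.
# The difficulty labels and thresholds below are illustrative assumptions.

def pick_practice_items(baskets, student_knowledge_level, count=2):
    """Select practice step objects from the basket whose difficulty
    label best matches the student's measured knowledge level."""
    # Map a knowledge level in [0.0, 1.0] to an assumed difficulty category.
    if student_knowledge_level < 0.4:
        label = "easy"
    elif student_knowledge_level < 0.75:
        label = "medium"
    else:
        label = "hard"
    items = baskets.get(label, [])
    return items[:count]

baskets = {
    "easy":   ["practice_360a", "practice_360b"],
    "medium": ["practice_360c"],
    "hard":   ["practice_360d", "practice_360e"],
}

print(pick_practice_items(baskets, 0.3))  # low knowledge level -> easy basket
```

In this sketch, a basket is simply a labeled list keyed by category, consistent with the description above that baskets can be implemented via attributes rather than as a distinct object type.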

For the sake of illustration, FIGS. 4 and 5 show a portion of an example introductory course on geometry. FIG. 4 shows an illustrative screenshot 400 of a course datagraph macrostructure editing environment, according to various embodiments. The course datagraph macrostructure editing environment can be displayed on a course authoring platform 160, as illustrated in FIG. 1. The particular illustrated course datagraph macrostructure is intended to illustrate certain functionality of a particular implementation and is not intended to be limiting. As shown, the illustrative course datagraph macrostructure includes three knowledge entities 210: “Introductory Shapes”, “4-Sided Shapes”, and “Triangles.” The “4-Sided Shapes” and “Triangles” knowledge entities 210 are shown as part of a “Basic Shapes” group 410. Dashed box 415 indicates that a course author has selected the “Triangles” knowledge entity 210c.

In response to this selection, a window 420 can populate with information about the selected knowledge entity 210c. The window 420 can include any useful controls, information, menus, etc. The illustrated window 420 includes various controls and/or information relating to the knowledge entity 210c contents. For example, the window 420 shows controls, including “Publish” for publishing a new or edited version of the knowledge entity 210c to the course for visibility by students, “Preview” for allowing the course author to see what a student would see when interacting with that knowledge entity 210c, “Edit” for accessing various editing functions for the knowledge entity 210c (e.g., including adding lesson step objects to an embedded lesson datagraph microstructure), and “Add Practice Items” for associating the knowledge entity 210c with practice step objects in an embedded practice datagraph microstructure. The window 420 can also include controls and information relating to the knowledge edges associated with the knowledge entity 210c. For example, the window 420 shows that the knowledge entity 210c is “in the Basic Shapes group” and is “a dependant of 4-Sided Shapes” (e.g., the “4-Sided Shapes” knowledge entity 210b is a prerequisite to the “Triangles” knowledge entity 210c).

FIG. 5 shows an illustrative screenshot 500 of a lesson datagraph microstructure editing environment, according to various embodiments. For example, the lesson datagraph microstructure for the “Triangles” knowledge entity 210c is displayed to the course author in response to selecting to edit the “Triangles” knowledge entity 210c in the interface shown in FIG. 4. As shown, the lesson datagraph microstructure includes three lesson step objects 310 having associated lesson responses 320, linked together to form a directed graph that defines a lesson flow relationship. Some implementations include a start node 510 and an end node 520 for the lesson datagraph microstructure, for example, to help define the flow relationship and/or to make the flow relationship more intuitive for the author. The particular illustrated lesson datagraph microstructure is intended to illustrate certain functionality of a particular implementation and is not intended to be limiting.

As illustrated, the lesson datagraph microstructure can begin at the start node 510 and proceed to a first lesson step object 310a. The lesson step object 310a presents certain content (introductory and basic information about triangles) and prompt language, and includes associated lesson responses 320a. The lesson responses 320a provide the student with three different paths through the lesson: one that is more visual and links to lesson step object 310b, one that is more textual and links to lesson step object 310c, and one that effectively skips the lesson by linking to the end node 520. Following the link to lesson step object 310b, the student is presented with a set of images of triangles and non-triangles, a prompt, and a set of lesson responses 320b relating to the prompt. Two of the three answer choices 530 presented in the lesson responses 320b are incorrect, and each includes related content and a further interactive prompt to “try again.” The third answer choice 530c is correct; selecting that choice provides explanatory text and links to a next lesson step object 310c. Lesson step object 310c teaches more information about triangles and has no associated lesson responses 320. After it is determined that the student has consumed lesson step object 310c (e.g., after some time, after the student clicks a “continue” button, or in any suitable manner), the flow relationship can continue to the end node 520 to end the lesson.
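The lesson flow of FIG. 5 can be encoded, purely by way of illustration, as a small directed graph. The node identifiers follow the reference numerals above; the representation and the traversal function are hypothetical assumptions, not a required implementation:

```python
# Hypothetical encoding of the FIG. 5 lesson flow as a directed graph.
# Keys follow the reference numerals above; structure is an assumption.

lesson_graph = {
    "start": {"next": ["310a"]},
    # Lesson responses 320a offer three paths through the lesson.
    "310a":  {"responses": {"visual": "310b", "textual": "310c", "skip": "end"}},
    # Only the correct answer choice 530c advances to the next step.
    "310b":  {"responses": {"choice_530c": "310c"}},
    # No lesson responses; advances to the end node once consumed.
    "310c":  {"next": ["end"]},
    "end":   {},
}

def step(node, response=None):
    """Follow one lesson edge from `node`, given an optional lesson response."""
    spec = lesson_graph[node]
    if response is not None and "responses" in spec:
        return spec["responses"][response]
    return spec.get("next", [None])[0]

# Walk the "visual" path described in the text above.
path = ["start"]
path.append(step("start"))                # -> 310a
path.append(step("310a", "visual"))       # -> 310b
path.append(step("310b", "choice_530c"))  # -> 310c
path.append(step("310c"))                 # -> end
print(path)
```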

Various implementations can permit different types of objects, responses, etc. For example, some implementations can require that every lesson step object 310 includes at least one lesson response 320 (e.g., lesson step object 310c would not be allowed in such implementations). Further, some implementations may permit or require objects that are global or external to the datagraph structures. For example, a user interface for course consumption may include various global controls, such as menus, navigation buttons, etc.; and/or course authors may be permitted to set up features in the user interface for course consumption, such as chat windows, file management areas, etc.

While FIG. 5 is described with reference to lesson step objects 310 of a lesson datagraph microstructure, some embodiments can implement practice step objects of a practice datagraph microstructure in a similar or identical fashion. The user interface and its functionality can be substantially the same for either type of datagraph microstructure, except that particular lesson- or practice-related functions can be included. For example, rather than presenting new information to the student as with lesson step objects, the practice step objects are intended to review and test that information.

FIG. 6 shows a block diagram of an illustrative course consumption environment 600 that includes a number of processor-implemented blocks for dynamically adapting course consumption to optimize a student's knowledge acquisition, according to various embodiments. The processor-implemented blocks can be used to measure knowledge levels of students with respect to concepts, as they proceed through an e-learning course, and to dynamically adapt aspects of the course to optimize knowledge acquisition of the students in accordance with their knowledge level. As described above, embodiments are described in context of an e-learning course that is implemented as a course datagraph macrostructure 105 having one or more knowledge entities 210, each knowledge entity 210 having one or more lesson datagraph microstructures 305 and one or more practice datagraph microstructures 350 embedded therein.

For example, some embodiments include a non-transient course data store (not shown) that stores a course datagraph macrostructure 105 having a knowledge entity 210 embedded therein. The knowledge entity 210 includes a lesson datagraph microstructure 305 that has a number of lesson step objects, each linked with at least another of the lesson step objects by a respective lesson edge that defines a lesson flow relationship between the lesson step objects. The knowledge entity 210 also includes a plurality of practice datagraph microstructures 350, each assigned a respective difficulty level, and each including a number of practice step objects, each practice step object linked with at least another of the practice step objects by a respective practice edge that defines a practice flow relationship between the practice step objects.

It can be assumed that the knowledge entity 210 is being consumed by a student (e.g., via a graphical user interface of a course consumption platform). Embodiments can include a processor-implemented knowledge entity adaptor 640 that is communicatively coupled (e.g., directly or indirectly) with the course data store and that identifies a next practice datagraph microstructure (illustrated as next selected item 645) to present to a student (e.g., via a processor-implemented course consumption platform). As described more fully below, the next selected item 645 can be determined as a function of a present knowledge level (e.g., an initial knowledge level or an updated knowledge level 635) associated with the student and as a function of the respective difficulty levels 605 of the practice datagraph microstructures 350. For example, the next selected item 645 can be selected to yield maximum information about the student's knowledge level regarding one or more sub-concepts of the knowledge entity 210.
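One hypothetical selection rule for the knowledge entity adaptor 640, consistent with the "maximum information" goal described above, is to choose the practice datagraph microstructure whose difficulty level 605 is closest to the student's present knowledge level. The function name and the closest-difficulty heuristic are assumptions for illustration:

```python
# Hypothetical sketch of the knowledge entity adaptor 640: select the
# next item 645 whose difficulty level 605 most closely matches the
# student's present knowledge level. The rule itself is an assumption.

def select_next_item(difficulty_levels, knowledge_level):
    """Return the id of the practice microstructure whose difficulty
    is closest to the student's present knowledge level."""
    return min(
        difficulty_levels,
        key=lambda item_id: abs(difficulty_levels[item_id] - knowledge_level),
    )

difficulties = {"practice_A": 0.2, "practice_B": 0.5, "practice_C": 0.8}
print(select_next_item(difficulties, 0.55))  # -> practice_B
```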

Some novel functionality described herein adapts how a course is presented to a student based on the student's knowledge level. For example, as described further below, a student can be associated with an initial knowledge level for the knowledge entity 210, and the knowledge level can be impacted by the student's performance during consumption of the knowledge entity 210. Accordingly, some embodiments can include a processor-implemented knowledge level estimator 630 that is communicatively coupled with the course data store and that receives response data 650 from the student in response to displaying the next selected item 645 to the student. The knowledge level estimator 630 can calculate an updated knowledge level 635 for the student (e.g., after the student consumes each practice datagraph microstructure 350, or at any other suitable time) as a function of the response data 650, the present knowledge level associated with the student (i.e., the initial knowledge level on the first iteration, or the updated knowledge level 635 on subsequent iterations, as described below), and the difficulty level 605 of the next selected item 645. As used herein, the term “response data 650” is intended to include any suitable types of information relating to the content and manner of the student's response.
For example, the response data 650 can include the correctness of response (e.g., whether the response is correct, most correct, partially correct, etc.), confidence level (e.g., a slider or other technique can be used to determine the student's confidence in his or her answer), time to answer the question (e.g., the amount of time the student spent before responding, which may be the raw amount of time, the amount of time normalized to that student's typical response time, etc.), a number of retries (e.g., if the student attempts the same practice item multiple times), a number of successes or mistakes in a row (e.g., if this is the first correct answer of its type after a string of mistakes, it may be more likely a lucky guess; while if this is another in a series of correct responses, it may be more likely an intentionally correct answer, etc.), etc.

The initial knowledge level for the knowledge entity 210 can be determined in any suitable manner. For example, the initial knowledge level can be a default value (e.g., 0.5), a median value across a group of students, a median value for the particular student across a group of knowledge entities 210, etc. In one implementation, the initial knowledge level is determined by adjusting a median value according to the student's performance on other knowledge entities 210. For example, a group of knowledge entities 210 can be defined by a course author or implied based on other information (e.g., knowledge edge relationships, metadata, similar textual content in a summary, sharing of one or more practice items between multiple knowledge entities 210, etc.), and the student's performance on one knowledge entity 210 in the group can impact the knowledge level associated with the student for other knowledge entities 210 in the group. In some implementations, a student's prior performance on one knowledge entity 210 in the group can have a forward-looking impact on the initial knowledge level assigned to the student for a not-yet consumed knowledge entity 210 in the group. In certain implementations, a student's present performance on one knowledge entity 210 in the group can have a backward-looking impact on the knowledge level of the student for a previously consumed knowledge entity 210 in the group. For example, poor performance by the student on certain practice datagraph microstructures 350 shared among multiple knowledge entities 210 can indicate that a student has not retained the knowledge acquired from a previously consumed knowledge entity 210. In response to such determinations, embodiments can take any suitable step, such as suggesting or requiring that the student review, or even re-consume, the previously-acquired knowledge entity 210 (or certain portions thereof, alternate versions thereof, etc.); graphically indicating the apparent loss of retention, etc.
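By way of illustration only, one way to initialize a student's knowledge level for a not-yet-consumed knowledge entity 210 in a group is to blend a default (e.g., median) value with the student's performance on other entities in the group. The blending weight and function name are hypothetical assumptions:

```python
# Hypothetical initialization of a knowledge level by adjusting a
# default (e.g., median) value according to the student's performance
# on other knowledge entities 210 in the same group. The 50/50 blend
# weight is an illustrative assumption.

def initial_knowledge_level(default, group_scores, weight=0.5):
    """Blend a default value with the student's mean knowledge level
    across previously consumed knowledge entities in the group."""
    if not group_scores:
        return default  # no history in the group: use the default
    group_mean = sum(group_scores) / len(group_scores)
    return (1 - weight) * default + weight * group_mean

print(initial_knowledge_level(0.5, []))          # no history -> default
print(initial_knowledge_level(0.5, [0.8, 0.9]))  # strong history raises it
```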

Some embodiments can also include a processor-implemented acquisition monitor 620 that tracks the updated knowledge level 635 to determine when the student can be considered to have “acquired” the knowledge entity 210. For example, a student can be considered to have acquired the knowledge entity 210 when the student's updated knowledge level 635 is determined to have reached (or exceeded) a predetermined threshold (e.g., a default threshold for that knowledge entity 210 or for all knowledge entities 210, such as a predetermined “mastery threshold”, a value of 0.85, etc.). In some implementations, the acquisition monitor 620 iteratively (i.e., repeatedly) directs the knowledge entity adaptor 640 to identify and present an appropriate next selected item 645, and directs the knowledge level estimator 630 to receive the response data 650 and calculate the updated knowledge level 635, until the knowledge entity 210 is acquired (e.g., until the updated knowledge level 635 for the student reaches a target knowledge level stored in association with the knowledge entity 210).
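The iterative loop driven by the acquisition monitor 620 can be sketched as follows, purely for illustration. The simulated responses, the fixed step size, and the function name are assumptions, not the disclosed update rule:

```python
# Hypothetical sketch of the acquisition monitor 620 loop: present an
# item, update the knowledge level, repeat until the mastery threshold
# is reached. The fixed +/-0.1 step is an illustrative assumption.

def run_until_acquired(initial_level, responses, threshold=0.85, step=0.1):
    """Iterate over simulated response correctness values, nudging the
    knowledge level up or down, until the threshold is reached or the
    responses are exhausted. Returns (final_level, items_consumed)."""
    level = initial_level
    consumed = 0
    for correct in responses:
        if level >= threshold:
            break  # knowledge entity considered "acquired"
        level = min(1.0, level + step) if correct else max(0.0, level - step)
        consumed += 1
    return level, consumed

# Five correct answers in a row; the threshold is crossed after four.
level, n = run_until_acquired(0.5, [True, True, True, True, True])
print(level, n)
```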

The knowledge level estimator 630 can calculate the updated knowledge level 635 in any suitable manner. In some implementations, each response to a practice datagraph microstructure 350 can increase or decrease the student's knowledge level by a preset amount (e.g., a fixed amount, a proportional amount, etc.), by a preset amount based on the difficulty level 605 of the next selected item 645, by a preset amount based on the updated knowledge level 635 of the student, etc. Some embodiments of the knowledge level estimator 630 calculate the updated knowledge level 635 for the student in such a way that a magnitude of change between the updated knowledge level 635 and the present knowledge level is inversely related to the difference between the difficulty level 605 of the next selected item 645 and the present knowledge level of the student. For example, when the difficulty level 605 of the next selected item 645 indicates (according to the present knowledge level of the student) that the next selected item 645 should be very difficult for that student, a correct response by the student is likely to be a lucky guess, and an incorrect response by the student is not surprising; so that any response may provide relatively little information about the student's actual knowledge level. Still, if the student correctly answers multiple “too difficult” next selected items 645, this can collectively indicate that the calculated knowledge level for the student is not representative of the student's actual knowledge level. On the other hand, when the difficulty level 605 of the next selected item 645 is closely aligned with the knowledge level of the student (i.e., it appears to be of appropriate difficulty for this student at this time), a correct or incorrect response by the student is likely to be indicative of the student's knowledge of that concept; so that any response may provide a relatively large amount of information about the student's actual knowledge level.
As such, the magnitude of change between the updated knowledge level 635 and the present knowledge level can be inversely related to the difference between the difficulty level 605 of the next selected item 645 and the present knowledge level of the student (i.e., a response to a next selected item 645 at a more appropriate difficulty level for that student at that time can cause the knowledge level estimator 630 to make a larger change to the value of the student's knowledge level).
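An update rule with this property can be sketched as follows, for illustration only. The weighting by alignment and the maximum step size are hypothetical assumptions consistent with, but not required by, the description above:

```python
# Hypothetical update rule for the knowledge level estimator 630: the
# size of the change shrinks as the gap between the item's difficulty
# level 605 and the student's present level grows. The linear weighting
# and the max_step value are illustrative assumptions.

def update_knowledge_level(level, difficulty, correct, max_step=0.15):
    """Move the level up (correct) or down (incorrect) by a step that is
    largest when the item difficulty matches the present level."""
    alignment = 1.0 - abs(difficulty - level)  # 1.0 = perfectly matched
    step = max_step * alignment
    new_level = level + step if correct else level - step
    return min(1.0, max(0.0, new_level))

# A well-matched item moves the estimate more than a mismatched one.
print(update_knowledge_level(0.5, 0.5, True))   # large step
print(update_knowledge_level(0.5, 0.95, True))  # small step (likely a guess)
```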

In some embodiments, a student profiler 670 maintains student profile information 675 about one or more students. For example, the student profiler 670 can be a non-transient data store, a processor-implemented block, and/or any other suitable functional element. The student profiler 670 can be disposed centrally (e.g., in a remote computational environment, for example, along with the course data store), disposed locally (e.g., in a student's local computational environment), etc. In some implementations, the student profile information 675 can impact operations of the acquisition monitor 620. For example, the student profile information 675 can include information about which knowledge entities 210 have previously been consumed and/or acquired by the student, or student traits known or thought to have an impact on performance for a particular knowledge entity 210 (e.g., learning style, gender, prior knowledge, etc.). The acquisition monitor 620 can, in some instances, determine the initial knowledge level for a student based on such student profile information 675. For example, the initial knowledge level can be provided as an input knowledge level 625 to the knowledge level estimator 630 in the first iteration.

As described above, certain adaptive functionality is intended to operate in context of a knowledge entity 210 that has multiple embedded practice datagraph microstructures 350, and each practice datagraph microstructure 350 can have an associated difficulty level 605. Some implementations automatically initialize each practice datagraph microstructure 350 with a respective difficulty level 605 (e.g., a default value, such as 0.5 on a scale from 0.0 to 1.0). Other implementations permit course authors (or other authorized individuals) to assign a desired difficulty level 605 to each practice datagraph microstructure 350. Other implementations allow students and/or others to nominate, vote, suggest, or otherwise directly influence a determination of difficulty level 605 for practice datagraph microstructures 350. Other implementations, as described below, can compute appropriate difficulty levels 605 for the practice datagraph microstructures 350 based, for example, on statistical information relating to prior responses to those practice datagraph microstructures 350 by many students.

In some embodiments, while there is inadequate (e.g., insufficient, unreliable, untested, unverified, etc.) difficulty level 605 information for the practice datagraph microstructures 350 of a knowledge entity 210, certain measures are taken in an attempt to capture such information. For example, some implementations seek to acquire sufficient statistical information to assign a reliable difficulty level 605 to each practice datagraph microstructure 350. Prior to acquiring sufficient statistical information, embodiments can present students with a fixed set of practice datagraph microstructures 350 to determine whether the student has acquired the knowledge entity 210. For example, the student can be provided with a predetermined number of practice datagraph microstructures 350 selected at random, a number of practice datagraph microstructures 350 defined by the course author as a default set, a set of practice datagraph microstructures 350 that includes a desired range of difficulty levels 605 (even if those difficulty levels 605 are not reliable at that stage), or any other suitable set of practice datagraph microstructures 350. In such a context, the knowledge level of the student may be determined and/or updated only after all (or some subset) of the practice datagraph microstructures 350 have been consumed and response data 650 has been acquired (as opposed to updating the knowledge level after each practice datagraph microstructure 350 is consumed). For example, the student's knowledge level can be computed according to a percentage of correct responses provided to the practice datagraph microstructures 350 (e.g., and/or additional response data 650, such as response speed, response confidence, categorical aptitude, etc.).
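Such a fixed-set fallback computation can be sketched as follows, for illustration only. The field names and the particular weighting of correctness versus confidence are hypothetical assumptions:

```python
# Hypothetical fallback scoring used before reliable difficulty levels
# 605 exist: consume a fixed set of items, then compute the knowledge
# level from the fraction answered correctly, lightly weighted by
# reported confidence. Field names and weights are assumptions.

def score_fixed_set(responses):
    """responses: list of dicts with 'correct' (bool) and 'confidence'
    (0.0-1.0). Returns a knowledge level in [0.0, 1.0]."""
    if not responses:
        return 0.0
    correct_fraction = sum(r["correct"] for r in responses) / len(responses)
    mean_confidence = sum(r["confidence"] for r in responses) / len(responses)
    # Weight correctness heavily; confidence only nudges the estimate.
    return 0.9 * correct_fraction + 0.1 * mean_confidence

responses = [
    {"correct": True,  "confidence": 0.8},
    {"correct": True,  "confidence": 0.6},
    {"correct": False, "confidence": 0.4},
    {"correct": True,  "confidence": 0.9},
]
print(score_fixed_set(responses))
```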

Once a large enough sample of student responses has been received for the practice datagraph microstructures 350, statistical data can be generated from those responses for use in computing appropriate difficulty levels 605. Some embodiments include a processor-implemented difficulty level estimator 610 that assigns the respective difficulty levels 605 to the practice datagraph microstructures 350. For example, a prior response dataset 615 can be generated from large numbers of prior student responses, and the prior response dataset 615 can be used by the difficulty level estimator 610 to compute the difficulty levels 605.
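One simple statistic the difficulty level estimator 610 could compute from the prior response dataset 615 is the fraction of incorrect responses; the minimum sample size and function name below are illustrative assumptions:

```python
# Hypothetical difficulty level estimator 610: once enough prior
# responses exist, estimate an item's difficulty as the fraction of
# students who answered it incorrectly. The min_sample threshold is
# an illustrative assumption.

def estimate_difficulty(prior_responses, min_sample=30):
    """prior_responses: list of booleans (True = correct). Returns a
    difficulty in [0.0, 1.0], or None if the sample is too small."""
    if len(prior_responses) < min_sample:
        return None  # keep using the default/author-assigned level
    incorrect = sum(1 for r in prior_responses if not r)
    return incorrect / len(prior_responses)

# 40 recorded responses, 10 of them incorrect -> difficulty 0.25.
sample = [True] * 30 + [False] * 10
print(estimate_difficulty(sample))
```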

Some embodiments provide additional types of adaptations. Many of the novel types of adaptations described above are “entity-level” adaptations. For example, one category of entity-level adaptations described above can dynamically select practice datagraph microstructures 350 to present to a student (e.g., as a function of the difficulty levels 605 of the practice datagraph microstructures 350 and the knowledge level of the student) in an attempt to optimize the knowledge acquisition of the student. Another category of entity-level adaptations described above can adapt initial knowledge levels of students for a knowledge entity 210 based on computed knowledge levels of the students for previously-acquired (or consumed) knowledge entities 210; and/or adapt previously computed knowledge levels of the students for previously-acquired (or consumed) knowledge entities 210 based on presently computed knowledge levels of students for a presently acquired (or consumed) knowledge entity 210. For example, such adaptations can be particularly applicable in context of practice datagraph microstructures 350 (and/or lesson datagraph microstructures 305) that are shared between multiple knowledge entities 210.

Another category of adaptation is a course-level adaptation. Some embodiments include a processor-implemented course adaptor 660 to perform such adaptations. As described above, knowledge entities 210 can be linked in a course datagraph macrostructure 105 by different types of knowledge edges. One type of such knowledge edge is a variant edge that identifies two or more knowledge entities 210 as variants of each other. Variants can be used, for example, to present a same or similar concept at varying levels of detail (e.g., a statistics course may be presented at one level for mathematics majors and at a different level for business majors), to present a same or similar concept with different types of examples (e.g., depending on different learning styles, different demographics (gender, socioeconomics, politics, nationality, etc.), different contexts (e.g., relating to a particular course of study or identified interest, etc.), etc.), to present a same or similar concept by different instructors (e.g., where different course authors contribute knowledge entities relating to a similar subject), etc. In practice, as a student traverses a course datagraph macrostructure 105 (e.g., as a student consumes a course) and encounters a variant, embodiments can automatically select one of the variants to present to the student in a manner that seeks to improve the student's acquisition of the concept presented by the variants. For example, the variant can be selected based on profile information of the student and/or based on the course flow being consumed by the student (e.g., where the variants are part of multiple courses, where different students have different prior knowledge, etc.). In some contexts involving such variants, an overall course or group of knowledge entities 210 can behave in an appreciably consistent manner, regardless of which variant is selected. 
For example, the course flow can be substantially the same for all students in the course, even though different variants may be selected along the way.
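The variant selection performed by the course adaptor 660 can be sketched as follows, for illustration only. The criteria-matching rule (counting shared profile key/value pairs) and all names are hypothetical assumptions:

```python
# Hypothetical course adaptor 660 behavior at a variant edge: pick the
# variant whose declared selection criteria best match the student's
# profile. The match-count scoring rule is an illustrative assumption.

def select_variant(variants, student_profile):
    """variants: dict mapping variant id -> criteria dict. Returns the
    variant id with the most profile fields matching its criteria."""
    def match_score(criteria):
        return sum(1 for k, v in criteria.items()
                   if student_profile.get(k) == v)
    return max(variants, key=lambda vid: match_score(variants[vid]))

# Example from the text: a statistics concept presented at one level
# for mathematics majors and at a different level for business majors.
variants = {
    "stats_for_math_majors":     {"major": "mathematics"},
    "stats_for_business_majors": {"major": "business"},
}
student = {"major": "business", "learning_style": "visual"}
print(select_variant(variants, student))  # -> stats_for_business_majors
```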

While it can be desirable in some instances to adapt a course to a particular student by selecting an appropriate variant knowledge entity to present to that student, there may not be clear a priori selection criteria. For example, in some instances, the course authors can indicate (e.g., in metadata of the knowledge entities or the edges) which criteria to use to select one variant over another (e.g., the course flow being consumed, the student's prior knowledge, student demographics, student learning style, etc.). In other instances, such information is not provided or may be unreliable (e.g., the course author has identified certain criteria, but it is desirable to verify the efficacy of that identification).

Where such a priori information is not available or not reliable, embodiments can seek to identify the most effective selection criteria from a sample set of previous responses. For example, acquisition effectiveness metrics can be generated for each of a set of variant knowledge entities (e.g., a dataset can be generated to associate acquisition effectiveness parameters for each variant (e.g., correctness of responses, speed of responses, confidence level in responses, etc.) with student profile information (e.g., initial assigned knowledge level, prior knowledge, course of study, demographics, etc.) for a large number of students that have acquired those variants). Embodiments can determine, as a function of the acquisition effectiveness metrics, multiple characteristic student profiles as yielding highest acquisition effectiveness for each variant (e.g., using machine learning to interpret the dataset to derive which student profile information tends to yield the highest acquisition effectiveness for each variant knowledge entity), and the determined characteristic student profiles can be assigned as selection criteria for each variant. When a student subsequently encounters the variant knowledge entities, embodiments can analyze profile data of the student to determine which of the characteristic student profiles more closely corresponds to the profile data of the student and can provide the student with a most appropriate one of the variants, accordingly.
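One hypothetical realization of this characteristic-profile matching is a nearest-centroid rule: each variant is summarized by a numeric profile learned from prior students, and a new student is routed to the variant whose centroid is nearest. The feature names and distance metric below are assumptions for illustration:

```python
# Hypothetical characteristic-profile matching: route a student to the
# variant whose learned centroid profile is nearest (Euclidean
# distance). Feature names and the metric are illustrative assumptions.
import math

def nearest_variant(centroids, student_features):
    """centroids: variant id -> feature vector (dict of floats).
    Returns the variant whose centroid is closest to the student."""
    def distance(centroid):
        return math.sqrt(sum((centroid[k] - student_features.get(k, 0.0)) ** 2
                             for k in centroid))
    return min(centroids, key=lambda vid: distance(centroids[vid]))

centroids = {
    "variant_visual":  {"prior_knowledge": 0.4, "visual_preference": 0.9},
    "variant_textual": {"prior_knowledge": 0.7, "visual_preference": 0.2},
}
student = {"prior_knowledge": 0.65, "visual_preference": 0.3}
print(nearest_variant(centroids, student))  # -> variant_textual
```

In practice the centroids would be derived from the acquisition effectiveness dataset described above (e.g., by clustering high-performing students per variant); that learning step is omitted here.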

FIG. 7 shows an illustrative computational system 700 for implementing one or more systems or components of systems, according to various embodiments. The computational system 700 is described as a particular machine for implementing course authoring functionality, like the course authoring platform 160 described with reference to FIG. 1. Embodiments of the computational system 700 can be implemented as or embodied in single or distributed computer systems, or in any other useful way.

The computational system 700 is shown including hardware elements that can be electrically coupled via a bus 755. The hardware elements can include one or more processors (shown as central processing units, “CPU(s)”) 705, one or more input devices 710 (e.g., a mouse, a keyboard, etc.), and one or more output devices 715 (e.g., a display, a printer, etc.). The computational system 700 can also include one or more storage devices 720. By way of example, storage device(s) 720 can be disk drives, optical storage devices, or solid-state storage devices such as a random access memory (RAM) and/or a read-only memory (ROM), which can be programmable, flash-updateable and/or the like. In some embodiments, the storage devices 720 are configured to store unpublished course data 722. For example, while course data (e.g., knowledge entities, microstructures, step objects, etc.) are being edited, unpublished or otherwise unofficial versions can be stored by certain implementations of a course authoring platform.

The computational system 700 can additionally include a communications system 730 (e.g., a modem, a network card (wireless or wired) or chipset, an infra-red communication device, etc.). The communications system 730 can permit data to be exchanged with a public or private network and/or any other system. For example, as shown in FIG. 1, the communications system 730 can permit the course authoring platform to communicate with a remote (e.g., cloud-based) course data store 140 via a public or private network 150 (shown in dashed lines for context). In some embodiments, the computational system 700 can also include a processing acceleration unit 735, which can include a DSP, a special-purpose processor and/or the like.

Embodiments can also include working memory 740, which can include RAM and ROM devices, and/or any other suitable memory. The computational system 700 can also include software elements, shown as being currently located within a working memory 740, including an operating system 745 and/or other code 750, such as an application program (which can be a client application, web browser, mid-tier application, relational database management system (RDBMS), etc.). As illustrated, a course authoring application 760 can be implemented in the working memory 740. Some implementations of the course authoring application 760 can include an author interface 762. For example, the author interface 762 can provide graphical user interface (GUI) and/or other functionality for receiving authoring commands relating to creation, editing, publishing, and/or other authoring-related course functions. Some implementations of the course authoring application 760 can include a datagraph compiler 764. For example, the datagraph compiler 764 can translate received authoring commands into datagraph commands for execution by the processor(s) 705 in interfacing with the datagraph structure for course authoring.

In some embodiments, the course data store 140 and/or the storage device(s) 720 implement a non-transient course data store that stores a course datagraph macrostructure having lesson datagraph microstructures embedded therein. The processor(s) 705 of the computational system 700 implement functions of a course authoring platform as the course authoring application 760, including receiving authoring commands (via the author interface 762), translating the authoring commands to executable datagraph commands (via the datagraph compiler 764), and executing the datagraph commands with the processor(s) 705. Executing the datagraph commands can include instantiating a number of knowledge entities as nodes of the course datagraph macrostructure, linking each knowledge entity with at least one other knowledge entity by instantiating a respective knowledge edge having a respective set of knowledge edge attributes that defines a course flow relationship between the knowledge entities; and embedding each knowledge entity with at least one of the lesson datagraph microstructures in the course data store by instantiating a number of lesson step objects as nodes of the lesson datagraph microstructure, defining a set of lesson responses associated with each lesson step object, and linking each lesson step object with at least one other lesson step object in the lesson datagraph microstructure by instantiating a respective lesson edge that defines a lesson flow relationship between the lesson step objects.
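The translation of authoring commands into datagraph mutations can be sketched, purely by way of illustration, as follows. The command vocabulary, the in-memory graph representation, and the function name are hypothetical assumptions and not the disclosed compiler:

```python
# Hypothetical sketch of the datagraph compiler 764: translate a small
# assumed vocabulary of authoring commands into mutations of an
# in-memory course datagraph macrostructure.

def compile_commands(commands):
    """Execute authoring commands against a fresh datagraph. Supported
    (assumed) commands: add_entity, link_entities, add_lesson_step."""
    graph = {"entities": {}, "edges": []}
    for cmd, *args in commands:
        if cmd == "add_entity":
            (name,) = args
            graph["entities"][name] = {"lesson_steps": []}
        elif cmd == "link_entities":
            src, dst, attrs = args  # knowledge edge with edge attributes
            graph["edges"].append((src, dst, attrs))
        elif cmd == "add_lesson_step":
            entity, step_id = args  # embed a lesson step object in a node
            graph["entities"][entity]["lesson_steps"].append(step_id)
        else:
            raise ValueError(f"unknown authoring command: {cmd}")
    return graph

# Mirror the FIG. 4 example: Triangles depends on 4-Sided Shapes.
graph = compile_commands([
    ("add_entity", "4-Sided Shapes"),
    ("add_entity", "Triangles"),
    ("link_entities", "4-Sided Shapes", "Triangles", {"type": "prerequisite"}),
    ("add_lesson_step", "Triangles", "310a"),
])
print(sorted(graph["entities"]))
```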

FIG. 8 shows another illustrative computational system 800 for implementing one or more systems or components of systems, according to various embodiments. The computational system 800 is described as a particular machine for implementing course consumption functionality, like the course consumption platform 170 described with reference to FIG. 1. Embodiments of the computational system 800 can be implemented as or embodied in single or distributed computer systems, or in any other useful way.

The computational system 800 is shown including hardware elements that can be electrically coupled via a bus 855. The hardware elements can include one or more processors (shown as central processing units, “CPU(s)”) 805, one or more input devices 810 (e.g., a mouse, a keyboard, etc.), and one or more output devices 815 (e.g., a display, a printer, etc.). The computational system 800 can also include one or more storage devices 820. By way of example, storage device(s) 820 can be disk drives, optical storage devices, or solid-state storage devices such as a random access memory (RAM) and/or a read-only memory (ROM), which can be programmable, flash-updateable and/or the like. In some embodiments, the storage devices 820 are configured to store student-specific data (e.g., student profile(s) 675). For example, some implementations of the course consumption platform can store profile information about one or more students.

The computational system 800 can additionally include a communications system 830 (e.g., a modem, a network card (wireless or wired) or chipset, an infra-red communication device, etc.). The communications system 830 can permit data to be exchanged with a public or private network and/or any other system. For example, as shown in FIG. 1, the communications system 830 can permit the course consumption platform to communicate with a remote (e.g., cloud-based) course data store 140 via a public or private network 150 (shown in dashed lines for context). In some embodiments, the computational system 800 can also include a processing acceleration unit 835, which can include a DSP, a special-purpose processor and/or the like.

Embodiments can also include working memory 840, which can include RAM and ROM devices, and/or any other suitable memory. The computational system 800 can also include software elements, shown as being currently located within a working memory 840, including an operating system 845 and/or other code 850, such as an application program (which can be a client application, web browser, mid-tier application, relational database management system (RDBMS), etc.). As illustrated, a course consumption application 860 can be implemented in the working memory 840. Some implementations of the course consumption application 860 can include a student interface 862. For example, the student interface 862 can provide graphical user interface (GUI) and/or other functionality for receiving interaction commands relating to consuming knowledge entities, microstructures, step objects, etc. (e.g., student interactions with responses, timer data, etc.), and/or other consumption-related course functions. Some implementations of the course consumption application 860 can include a datagraph compiler 864. For example, the datagraph compiler 864 can translate received interaction commands into datagraph commands for execution by the processor(s) 805 in interfacing with the datagraph structure for dynamic course generation, course adaptation, course display, etc. Some implementations of the course consumption application 860 can include a student profiler 866. For example, the student profiler 866 can receive explicit profile information from a student (e.g., through a profile page, an external database, etc.), monitor student interactions with course materials to infer student profile information (e.g., learning styles, strengths, tendencies, etc.), and/or develop the student profile in any other suitable manner. In some embodiments, the course consumption application 860 implements functions of one or more other blocks described herein.

In some embodiments, the course data store 140 and/or the storage device(s) 820 implement a non-transient course data store that stores an e-learning datagraph structure. The datagraph structure can include a number of knowledge entities stored as nodes of a course datagraph macrostructure, each knowledge entity being linked with at least one other knowledge entity in the course datagraph macrostructure via a respective knowledge edge having a respective set of knowledge edge attributes that defines a course flow relationship among the knowledge entities. Each knowledge entity can include one or more lesson datagraph microstructures, each having one or more lesson step objects, each linked with at least another of the lesson step objects by a respective lesson edge that defines a lesson flow relationship between the lesson step objects; and one or more practice datagraph microstructures, each assigned a respective difficulty level, and each having one or more practice step objects, each practice step object linked with at least another of the practice step objects by a respective practice edge that defines a practice flow relationship between the practice step objects.

The processor(s) 805 of the computational system 800 implement functions of a course consumption platform as the course consumption application 860, including determining a profile of a first student (via the student profiler 866), and adaptively generating and displaying an e-learning course from the e-learning datagraph structure according to the profile of the first student (via the student interface 862 and the datagraph compiler 864). As described above, the e-learning datagraph structure permits dynamic adaptation of the course materials to each student. For example, a first instance of the computational system 800 can use its processor(s) 805 to implement a first instance of a course consumption platform that determines a profile of a first student and adaptively generates and displays a first instance of an e-learning course from the e-learning datagraph structure according to the profile of the first student; and a second instance of the computational system 800 can use its processor(s) 805 to implement a second instance of a course consumption platform that determines a profile of a second student and adaptively generates and displays a second instance of an e-learning course from the same e-learning datagraph structure according to the profile of the second student (i.e., where the second instance is different from the first instance).

It should be appreciated that alternate embodiments of computational systems 700 and 800 can have numerous variations from those described above. For example, customized hardware can be used and/or particular elements can be implemented in hardware, software (including portable software, such as applets), or both. Further, connection to other computing devices such as network input/output devices can be employed. In various embodiments, computational systems like those illustrated in FIGS. 7 and 8 can be used to implement one or more functions of the systems described with reference to FIGS. 1, 6, and 13, and/or to implement one or more methods, such as those described with reference to FIGS. 9, 10, 11, 14, and 15.

Self-Constructing Content

In some cases, it is desirable to self-construct content to supplement content generated by a course author. For example, suppose a course author designs a knowledge entity to teach a particular concept, one or more lesson datagraph microstructures to teach sub-concepts, and a few practice datagraph microstructures to help test and/or reinforce the sub-concepts. As described herein, a number of features can be facilitated (or better utilized) by having a large number of practice datagraph microstructures at varying levels of difficulty, and ensuring that each practice datagraph microstructure has good response sets, explanations, diagnostics, etc. For example, providing large numbers of well-formulated practice datagraph microstructures can help facilitate knowledge level measurement and dynamic course adaptations that exploit such knowledge level measurements. Novel techniques are described herein for self-constructing such content automatically (e.g., using the course backend processor 180 of FIG. 1, the course authoring platform 160 of FIG. 1, or any other suitable environment).

FIG. 9 shows a flow diagram of an illustrative method 900 for self-constructing various types of content, according to various embodiments. Embodiments of the method 900 begin at stage 904 by self-constructing practice datagraph microstructures, so that each can have an a priori definition of at least one prompt and at least one correct response to the at least one prompt. For example, each practice datagraph microstructure can be considered as a practice question (practice item, practice entity, etc.) that has question text ending in a prompt for a response (e.g., “What is 2+2*2?”). Each self-constructed practice datagraph microstructure includes such a prompt and a correct response (e.g., “6” in the preceding example).

For the sake of illustration, suppose a course author has designed a course on basic subtraction, including various lesson datagraph microstructures to teach sub-concepts, such as subtraction fundamentals, borrowing, visualization techniques, word problem parsing strategies, etc. An effective manner of testing acquisition of, and of reinforcing, such sub-concepts may be to provide students with a large number of subtraction problems to track and improve their knowledge level (e.g., as described with reference to FIG. 6). However, it may be cumbersome (e.g., inefficient, impractical, and even onerous) to require the course author to generate large numbers of practice datagraph microstructures. Accordingly, embodiments can self-construct practice datagraph microstructures based on authored templates. The course author can define (e.g., formulaically, algorithmically, etc.) parameters of the practice datagraph microstructures, and embodiments can self-construct many similar problems in a manner that varies in difficulty level. For example, the course author can design a template to generate subtraction problems in which two two-digit numbers are subtracted. Over a large number of self-constructed practice datagraph microstructures, some will likely have solutions that involve borrowing and/or other techniques, while others likely will not. In another example, suppose a course author has designed a course on verb tense, including various lesson datagraph microstructures to teach sub-concepts, such as present tense, past tense, etc. The course author can design a template to automatically generate multiple sentence examples (e.g., pulling sentences from a library, automatically constructing sentences based on natural language processing rules, etc.), to automatically identify the verbs in the sentence, and to randomly determine whether to change the tense of each identified verb.
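The two-digit subtraction template described above might be sketched as follows. This is an illustrative Python sketch under stated assumptions; the template function and the borrowing-based difficulty heuristic are hypothetical, not the platform's actual implementation:

```python
import random

def subtraction_template(rng):
    # Self-construct one practice item from an authored template:
    # "subtract two two-digit numbers," with the larger number first.
    a, b = sorted(rng.sample(range(10, 100), 2), reverse=True)
    # Whether the units digits require borrowing is a rough difficulty signal.
    needs_borrowing = (a % 10) < (b % 10)
    return {
        "prompt": f"What is {a} - {b}?",
        "correct_response": str(a - b),
        "difficulty": "harder" if needs_borrowing else "easier",
    }

# Over a large number of self-constructed items, some require borrowing
# and some do not, yielding items of varying difficulty.
rng = random.Random(42)
items = [subtraction_template(rng) for _ in range(100)]
```

Each generated item carries an a priori prompt and correct response, as the self-construction at stage 904 requires.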

For example, FIG. 10 shows a flow diagram of an illustrative method 1000 for self-constructing practice datagraph microstructures, according to various embodiments. Embodiments can begin at stage 1004 by compiling a practice entity template from a definition received from a course author. At stage 1008, a number of practice datagraph microstructures can be constructed from the practice entity template, so that each practice datagraph microstructure has an a priori definition of a respective prompt and a respective correct response to the respective prompt. In some implementations, it is desirable, not only to self-construct large numbers of practice datagraph microstructures, but also to cull those that are ineffective at testing and/or reinforcing knowledge acquisition.

At stage 1012, the self-constructed practice datagraph microstructures can be presented to multiple students, and embodiments can compute whether each practice datagraph microstructure is an effective differentiator of student knowledge according to the knowledge levels of the students and their responses to the practice datagraph microstructure. For example, certain item response theory techniques, and/or other techniques, can be used to determine whether the practice datagraph microstructure is effective. One such technique involves plotting a probability that a student will correctly respond to the practice datagraph microstructure versus a student's knowledge level (i.e., given a particular measured knowledge level of a student at the time the practice item is presented to the student, how likely is the student to get the correct answer to the practice item). An effective differentiator of student knowledge can tend to exhibit a plot having a sharp transition from lower probabilities to higher probabilities at some knowledge level (i.e., students below a particular knowledge level clearly have difficulty with the practice item, while students above that knowledge level clearly do not), while an ineffective differentiator of student knowledge can tend to exhibit a plot having an irregular shape, or otherwise lacking a sharp transition.
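One concrete instance of such a plot is the two-parameter logistic item characteristic curve from item response theory. The effectiveness check below is a deliberately crude sketch of the "sharp transition" criterion described above; the threshold and gap parameters, and the use of expected response probabilities in place of observed student responses, are illustrative assumptions rather than the actual computation:

```python
import math

def icc(theta, discrimination, difficulty):
    # Two-parameter logistic item characteristic curve: the probability
    # that a student at knowledge level theta answers the item correctly.
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

def is_effective_differentiator(levels, outcomes, threshold=0.0, min_gap=0.5):
    # Crude empirical stand-in for the analysis described above: students
    # below the threshold should mostly respond incorrectly and students
    # above it mostly correctly, i.e., the plot shows a sharp transition.
    below = [o for l, o in zip(levels, outcomes) if l < threshold]
    above = [o for l, o in zip(levels, outcomes) if l >= threshold]
    if not below or not above:
        return False
    return (sum(above) / len(above)) - (sum(below) / len(below)) >= min_gap

# Expected response probabilities for a sharply discriminating item
# versus a nearly flat (ineffective) one, over a range of knowledge levels.
levels = [t / 10.0 for t in range(-20, 21)]
sharp = [icc(t, discrimination=8.0, difficulty=0.0) for t in levels]
flat = [icc(t, discrimination=0.2, difficulty=0.0) for t in levels]
```

Under this sketch, the high-discrimination item passes the check while the flat item does not, matching the intuition of the plot-shape criterion.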

Some embodiments can iterate through a number of stages, for each self-constructed practice datagraph microstructure, to determine whether the practice datagraph microstructure should be maintained or removed. At stage 1016, a determination can be made as to whether the practice datagraph microstructure is an effective differentiator of student knowledge (e.g., based on the computation at stage 1012). If not, the practice datagraph microstructure can be removed at stage 1020. If so, some implementations can make a further determination at stage 1024 as to whether there is some other reason to remove the practice datagraph microstructure. For example, it can be desirable to ensure that the full set of practice datagraph microstructures falls within a reasonable range of difficulty levels (e.g., those falling outside an acceptable range can be removed or flagged). If there is some other reason to remove the practice datagraph microstructure, the practice datagraph microstructure can be removed at stage 1020. If not, the practice datagraph microstructure can be retained at stage 1028.

At stage 1032, the method 1000 can iterate until no more practice datagraph microstructures remain to be evaluated. In some embodiments, when no more practice datagraph microstructures remain to be evaluated, the method 1000 can end. In other embodiments, when no more practice datagraph microstructures remain to be evaluated, a further determination can be made as to whether more practice datagraph microstructures are needed or desired at stage 1036. For example, the culling process of stages 1016-1032 may have removed too many practice datagraph microstructures, and the method 1000 can return to stage 1008 to generate more. Further, it can be desirable to ensure that the set of practice datagraph microstructures exhibits a good distribution across knowledge levels (e.g., so that, ideally, embodiments can always find another practice datagraph microstructure to present to a student at a particular knowledge level when needed); and, if not, it can be desirable to add more practice datagraph microstructures. Otherwise, the method 1000 can end.
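The retain/remove/replenish loop of stages 1016-1036 can be sketched as follows. The predicates and integer stand-ins for practice items are hypothetical placeholders for the effectiveness computation of stage 1012 and the "other reasons" of stage 1024:

```python
def cull_and_replenish(items, is_effective, other_reason_to_remove,
                       generate_more, target_count):
    # Stages 1016-1028: keep only items that are effective differentiators
    # and have no other reason for removal.
    retained = [i for i in items
                if is_effective(i) and not other_reason_to_remove(i)]
    # Stage 1036: if culling removed too many, generate more (stage 1008)
    # until the target count is reached, applying the same checks.
    while len(retained) < target_count:
        candidate = generate_more()
        if is_effective(candidate) and not other_reason_to_remove(candidate):
            retained.append(candidate)
    return retained

# Demo with integer stand-ins: even numbers are "effective," and the
# value 4 is removed for some "other reason" (e.g., out-of-range difficulty).
pool = list(range(10))
gen = iter(range(10, 100))
kept = cull_and_replenish(pool,
                          is_effective=lambda i: i % 2 == 0,
                          other_reason_to_remove=lambda i: i == 4,
                          generate_more=lambda: next(gen),
                          target_count=6)
```

The same loop structure accommodates either ending the method when the pool is exhausted or returning to generation, as the two embodiments describe.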

Returning to FIG. 9, at stage 908, embodiments can provide another type of self-constructed content by self-constructing sets of incorrect responses for each of a number of practice datagraph microstructures, according to arbitrary interactions received from multiple students in response to presenting the students with the prompts of the practice datagraph microstructures. In some embodiments, self-construction of sets of incorrect responses can be applied automatically to all self-constructed practice datagraph microstructures. In other embodiments, self-construction of sets of incorrect responses can be applied to any existing practice datagraph microstructure, whether self-constructed or authored, and the self-construction can be performed automatically (e.g., in response to detecting that the practice datagraph microstructure includes fewer than a desired number of incorrect responses, that the existing set of incorrect responses is sub-optimal based on feedback, etc.) or on-demand (e.g., in response to performing such self-construction by a course author or other role). For example, as illustrated by stage 910, embodiments of stage 908 can be performed with respect to any practice datagraph microstructure, regardless of whether it is authored or self-constructed.

For example, FIG. 11 shows a flow diagram of an illustrative method 1100 for self-constructing incorrect responses, according to various embodiments. Embodiments of the method 1100 begin at stage 1104 by providing a course datagraph macrostructure to course consumption platforms, the course datagraph macrostructure having a knowledge entity embedded therein, the knowledge entity including a practice datagraph microstructure having an a priori definition of a prompt and a correct response to the prompt (e.g., according to the method 1000 of FIG. 10). It is assumed that the practice datagraph microstructure lacks a priori definition of incorrect responses to the prompt (e.g., there are no, or too few, incorrect responses identified for the practice item). Such a practice datagraph microstructure is configured (e.g., manually, by default, or automatically in response to detecting the lack of a priori definition of incorrect responses) a priori to receive an arbitrary interaction from a student in response to presenting the prompt to the student. As used herein, an “arbitrary response” is intended to include any suitable type of response to a prompt other than a multiple choice selection. For example, the arbitrary response can be free form text input, voice input, drawing input, etc. Notably, the input can be considered as “arbitrary” even where there are appreciable limits on the type of input permitted (e.g., a prompt may be configured only to permit a single-digit numeral to be entered, but it is still considered “arbitrary” as the input is not restricted to a selection of defined multiple choice options).

At stage 1108, arbitrary interactions are received from multiple course consumption platforms in response to presenting the prompt to each respective student via each course consumption platform. Some implementations are only concerned with arbitrary interactions determined to be incorrect with respect to the defined correct response to the prompt. The remainder of the method 1100 can be used to self-construct a set of incorrect response options (e.g., as multiple choice options) based on a large number of the received arbitrary interactions. Notably, depending on the type of practice datagraph microstructure, constraints on the prompt, permitted types of arbitrary interactions, etc., embodiments may process (e.g., analyze, parse, filter, perform natural language processing on, etc.) the arbitrary interactions to extract a “response” to the prompt. For example, if the prompt is “What is 2+2*2?”; and the arbitrary interaction is free form text, reading “The answer is eight”; embodiments can use various techniques to translate the arbitrary interaction into a response of “8”.

For example, matching of an arbitrary interaction to a particular response can involve “strict” matching (e.g., for a particular text string, mathematical operator, etc.) or “fuzzy” matching. In some implementations of “fuzzy” matching, the arbitrary interaction can be compared to each of a number of expected or otherwise predetermined responses, while ignoring elements that are not the essence of the response (e.g., white space, punctuation, text in a mathematical response, etc.). Other “fuzzy” matching can include, where appropriate, matching a term when a synonym is used, matching a mathematical expression based on computed value or to their non-numerical equivalents (such as algebraic equivalents), etc. Some implementations can provide feedback to the student when an arbitrary interaction cannot be parsed into a useful response. In such cases, the student can be prompted to reenter the response, to select from a set of responses, etc. (some implementations can still record the arbitrary interaction for potential later use). As described below, the arbitrary interaction can be used in various ways. For example, when an arbitrary interaction is determined to be a response (e.g., by determining that the arbitrary interaction can be parsed into an appropriate response, after receiving an indication that the arbitrary interaction should be added as a valid response by the student or a moderator, etc.), the arbitrary interaction can be confirmed as a match (e.g., it can be added in its raw or a modified form as a new variation to allow auto-matching of similar future arbitrary interactions), the arbitrary interaction can be rejected as a match (e.g., and optionally indicated as matching some other response), etc. As described herein, treatment of the arbitrary interaction can impact the student's knowledge level, reputation score, etc. 
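A minimal sketch of such parsing and "fuzzy" matching follows. The word-to-digit table, normalization rules, and function names are illustrative assumptions, not the platform's actual matching logic:

```python
import re

# Hypothetical word-to-digit table for parsing free-form numeric answers.
WORDS = {"zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
         "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9"}

def extract_response(arbitrary_interaction):
    # "Fuzzy" matching sketch: normalize case, strip punctuation (elements
    # that are not the essence of the response), and map number words onto
    # digits, so that "The answer is eight" and "8." both yield "8".
    text = arbitrary_interaction.lower()
    text = re.sub(r"[^\w\s]", " ", text)
    tokens = [WORDS.get(t, t) for t in text.split()]
    for t in tokens:
        if t.isdigit():
            return t
    return None  # could not parse; prompt the student to reenter

def matches(arbitrary_interaction, expected_response):
    response = extract_response(arbitrary_interaction)
    return response is not None and response == expected_response
```

When `extract_response` returns `None`, the feedback path described above applies: the student can be asked to reenter or select a response, while the raw interaction is still recorded for potential later use.
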
While much of the description above implies that an arbitrary interaction prompt is provided on its own, in some instances such a prompt may be provided in response to some other interaction. For example, a student may be provided with a number of multiple choice options, and one option can read “None of these options are correct.” In response to selecting that option, the student may be provided with an arbitrary interaction prompt to provide a free form response.

In some implementations, a determination is made at stage 1110 as to whether a sufficient amount of data is available to self-construct sets of incorrect responses. For example, a sufficient amount of data can be deemed available when a minimum threshold population has provided their respective arbitrary interactions, when a minimum threshold number of arbitrary interactions has been received, when a minimum threshold number of arbitrary interactions determined to be different has been received (e.g., where each arbitrary interaction is processed to determine whether it is identical, or substantially similar, to a previously received response to consider it as another instance of the same response), etc.

At stage 1112, embodiments can process the plurality of arbitrary interactions to generate a ranked set of proposed responses. In some implementations, each arbitrary interaction is processed to determine whether it is identical, or substantially similar, to a previously received response to consider it as another instance of the same response. The rankings can then be determined according to the relative frequency of each response. In other implementations, other analytics can be used to evaluate different types of arbitrary interactions, such as determining how close the student's incorrect response (derived from the arbitrary interaction) is to the defined correct response.
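The frequency-based ranking of stage 1112 might look like the following sketch, in which simple string normalization stands in for the similarity analysis described above (an illustrative assumption):

```python
from collections import Counter

def rank_proposed_responses(incorrect_responses):
    # Group substantially similar responses (here, trivial normalization
    # stands in for the similarity analysis) and rank the distinct
    # responses by their relative frequency, most common first.
    normalized = [r.strip().lower() for r in incorrect_responses]
    return [response for response, _ in Counter(normalized).most_common()]

# Example: for "What is 2+2*2?" (correct response "6"), the most common
# incorrect response might be "8" (left-to-right evaluation).
ranked = rank_proposed_responses(["8", "8 ", "16", "8", "4", "16"])
```

The resulting ranked list can then feed the selection of self-constructed incorrect responses at stage 1116 (e.g., taking the most frequent, or mixing in low-frequency "apparent" distractors).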

At stage 1116, the practice datagraph microstructure can be adapted in the course data store to define a number of self-constructed incorrect responses to the prompt according to the ranked set of proposed responses. The number of self-constructed incorrect responses can be defined in any suitable manner. For example, the set of incorrect responses can be generated to include the most likely incorrect responses, to include some number of more apparent incorrect responses (e.g., one or more incorrect response options that should be clearly incorrect to a student having any grasp of the concept, determined for example, as a received response having very low frequency or determined to be very far from the correct response), to include a high diversity of responses, etc. Further, the number of incorrect responses defined can be chosen in any suitable manner, for example, according to the number of different responses received from the arbitrary interactions, according to the number of different close-to-correct responses received from the arbitrary interactions, according to a predefined number of incorrect responses desired for each practice datagraph microstructure, etc.

In some embodiments, the adapted practice datagraph microstructure is reconfigured in the course data store to act differently when consumed by a student. For example, at stage 1120, embodiments can reconfigure the adapted practice datagraph microstructure to present a set of response options in association with presenting the prompt, so that the set of response options includes the correct response and the set of self-constructed incorrect responses. For example, prior to self-constructing the set of incorrect responses, the practice datagraph microstructure presents the prompt with an input field for receiving an arbitrary interaction (i.e., this is how the practice datagraph microstructure is presented to the student during course consumption, when compiled prior to stage 1120). Subsequent to the reconfiguring at stage 1120, the practice datagraph microstructure presents the prompt with a set of response options that can be selected by the student. At stage 1124, the adapted practice datagraph microstructure can be further reconfigured to receive a selection of one of the set of response options in response to presenting the prompt. For example, the response options are presented in a selectable manner via the student's course consumption platform.

Returning to FIG. 9, at stage 912, some embodiments can further self-construct a set of proposed explanations for each of a number of incorrect responses, according to rationale data received from students in association with student selections of ones of the sets of incorrect responses in response to presenting the students with practice datagraph microstructures. In some embodiments, self-construction of proposed explanations can be applied automatically to all self-constructed sets of incorrect responses. In other embodiments, self-construction of proposed explanations can be applied to any existing correct or incorrect responses of any practice datagraph microstructures, whether self-constructed or authored, and the self-construction can be performed automatically (e.g., in response to detecting a lack of explanation, that the existing explanation is sub-optimal based on feedback, etc.) or on-demand (e.g., in response to performing such self-construction by a course author or other role). For example, as illustrated by stage 914, embodiments of stage 912 can be performed with respect to any sets of responses, regardless of whether they are authored or self-constructed.

In some embodiments, it is desired, not only to use practice datagraph microstructures to test a student's grasp of a sub-concept, but also to better understand why a student is confused or failing to grasp a concept, and to use such understanding to further optimize knowledge acquisition. To this end, some implementations of the self-construction of explanations at stage 912 seek to develop explanations to responses according to student feedback. For example, after a student selects a response (from the set of response options, or even when arbitrary interactions are permitted for responses), and prior to telling the student whether the response was correct or incorrect, embodiments can provide an additional prompt seeking a student's rationale for making a particular selection (e.g., “Why did you choose that response?”). In a similar manner to the self-construction of sets of incorrect responses, some embodiments can gather large numbers of such responses as arbitrary interactions, and the arbitrary interactions can be analyzed to form a set of rationales used by different students to make their selection (e.g., of both correct and incorrect responses, of only incorrect responses, etc.). The set of rationales can be used to self-construct proposed explanations, which can be culled, filtered, ranked, etc. In some implementations, one or more self-constructed proposed explanations can be presented to the student automatically in response to a future selection of a corresponding incorrect response (e.g., a highest ranking proposed explanation can be presented, a set of multiple proposed explanations can be presented, etc.). In other implementations, the set of rationales can be provided to a course author (e.g., or a moderator, etc.) for use in authoring one or more corresponding explanations.

In some embodiments, a set of students can be identified as “trusted,” “advanced,” etc. with respect to a particular subject area, sub-concept, or the like. For example, a particular student can be identified as having a very high knowledge level with respect to a particular knowledge entity, as having previously completed a course with high achievement, etc. Such students can be given the opportunity to provide proposed explanations to correct and/or incorrect responses to practice datagraph microstructures (e.g., instead of providing such opportunity to all students during a self-construction phase, or even after explanations have been generated as a technique for improving such explanations or obtaining additional explanations). In some implementations, those and/or other students are given the opportunity to rate responses, explanations, etc. to further rank, cull, and/or otherwise improve knowledge acquisition.

Some embodiments of the method can self-construct a set of diagnostics corresponding to one or more sets of explanations at stage 916. For example, as illustrated by stage 914, embodiments of stage 916 can be performed with respect to any set of explanations, regardless of whether they are authored or self-constructed. The diagnostics can seek to adapt explanations to rationales. Some implementations can further determine a most appropriate one of the explanations to present based on feedback and profile information. For example, when presenting a student with an explanation, students can also be presented with a prompt to determine whether they found the explanation to be useful. Over time, a student's knowledge level, profile information, etc. can be used to determine the student's similarity to students who previously selected a particular response, and one of multiple explanations for that response can be provided to the student, accordingly. For the sake of illustration, if the prompt is “What is 2+2*2?”, the student may incorrectly choose the answer “8” because of a basic lack of understanding of addition or multiplication, because of a lack of understanding of order of operations, because of a typographical error or careless mistake, or because the student does not know that “*” means multiplication. Alternatively, the student may choose the correct response because of an understanding of all the involved concepts or because of a lucky guess. In each case, providing the student with the correct feedback can be important in optimizing the student's acquisition of knowledge.
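The profile-similarity selection among multiple explanations for the same response might be sketched as follows. The feature-dict profiles, the distance measure, and the explanation texts are illustrative assumptions only:

```python
def choose_explanation(student_profile, explanations):
    # Pick, from several candidate explanations for the same response,
    # the one whose past audience profile most resembles the current
    # student (profiles modeled as dicts of knowledge-level features).
    def similarity(a, b):
        keys = set(a) | set(b)
        # Negative L1 distance: larger is more similar.
        return -sum(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys)
    return max(explanations,
               key=lambda e: similarity(student_profile, e["audience_profile"]))

# Two hypothetical explanations for the incorrect response "8" to
# "What is 2+2*2?", each tagged with the profile of students it helped.
explanations = [
    {"text": "Multiplication is evaluated before addition, so 2+2*2 = 2+4 = 6.",
     "audience_profile": {"order_of_operations": 0.2, "arithmetic": 0.9}},
    {"text": "The '*' symbol means multiplication.",
     "audience_profile": {"order_of_operations": 0.8, "arithmetic": 0.3}},
]
student = {"order_of_operations": 0.1, "arithmetic": 0.85}
best = choose_explanation(student, explanations)
```

Here the student's profile (weak on order of operations, strong on arithmetic) matches the audience that benefited from the order-of-operations explanation, so that explanation is selected.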

Some embodiments are described above as having certain functionality prior to self-construction and different functionality after. For example, implementations involving self-constructed responses are described as showing an arbitrary input prompt prior to the self-construction and showing a set of responses after. Notably, embodiments can also include an intermediate stage, during which some self-construction has occurred, but more is still desired. For example, suppose it is desirable to have at least 5 response options, including the correct response, for a particular practice datagraph microstructure (e.g., or one of its practice step objects). After some time, self-construction yields two response options (e.g., or two that have been determined as useful or effective), so that there is now a total of three response options for that practice item. Embodiments can continue to provide one option as “Other” (or “None of these answers matches my response,” or the like) to continue gathering arbitrary responses. Alternatively, such an arbitrary response prompt can continue to be provided even after a full set of acceptable response options is available.

Further, while some embodiments are described above as having a single correct answer, other embodiments can have multiple correct (or even partially correct) answers. For example, a practice item can be authored with at least one correct response (a priori) to ensure that there is some way to determine whether a student is correct prior to self-constructing additional responses. However, the self-construction techniques can be used to generate both incorrect responses and additional correct (or partially correct) responses, and some or all of those can be presented to students as part of a set of response options. In some instances, the set of response options presented to the student can be adapted to the student. For example, because one motivation of the practice items is to further reinforce and develop a student's knowledge of a concept, the presented response options can be tailored to yield a desired effect (e.g., to help the student parse subtleties in options that appear very similar, to determine a correct response from obviously incorrect alternatives, etc.). Similarly, providing a different set of response options for the same practice item can impact the difficulty level of the practice item, which can permit an additional level of adaptability.

Dynamic Contribution Valuation

As described herein, embodiments permit many levels of content to be contributed by many different authors and/or consumed by many different students. For example, a single course can be authored by multiple authors (e.g., under the direction of one or more academic directors, program managers, etc.) and/or by self-construction (e.g., as described above); and various items of content can be consumed and/or acquired by multiple students, for example, as the course dynamically adapts to the students. Further, as described above, many types of content can be reused and/or repurposed, for example using virtual course boundaries.

In such environments, it can be desirable to dynamically compute the value of those content items to optimize the pricing of courses to students and/or to provide appropriate compensation to contributors of those items. In traditional environments involving content generation from multiple contributors, consumers tend to pay relatively fixed amounts per content item, and contributors tend to get paid relatively fixed amounts, regardless of the value of the content to the consumer. Multiple techniques are described herein for determining the effectiveness of different types of content items on knowledge acquisition of a student. Embodiments use such techniques to re-price content and/or to re-value compensation based on content effectiveness.

FIG. 13 shows a block diagram of an illustrative dynamic content valuation environment 1300, according to various embodiments. As illustrated, content items can be considered in their datagraph hierarchy. According to some embodiments, as described above, a non-transient course data store stores a course datagraph macrostructure having a plurality of knowledge entities 210 instantiated as nodes of the course datagraph macrostructure in the course data store, and each knowledge entity 210 is linked with at least one other knowledge entity 210 in the course datagraph macrostructure by a respective knowledge edge having a respective set of knowledge edge attributes that defines a course flow relationship between the knowledge entities 210. For example, courses 1305 can be considered as collections of knowledge entities 210 (e.g., defined by virtual course boundaries in a course datagraph macrostructure); knowledge entities 210 can be considered as collections of their embedded microstructures (lesson datagraph microstructures 305 and their embedded practice datagraph microstructures 350); microstructures can be considered as collections of their step objects (practice step objects 360 and lesson step objects 310); and step objects can be considered as collections of their responses 320,370 (and/or explanations, diagnostics, etc.). Further, some content items (e.g., some lesson datagraph microstructures 305 and practice datagraph microstructures 350) can include additional authored content, such as drawings, photographs, graphs, videos, simulations, etc. Each of these types of content items can be valued on its own, according to its sub-items (e.g., lesson datagraph microstructures 305 and practice datagraph microstructures 350 can be considered as sub-items of a knowledge entity 210), according to its dependent items (e.g., the effect of a prerequisite knowledge entity 210 on other knowledge entities 210 for which it is a prerequisite), etc.

Embodiments include a scoring processor 1310 that is in communication with the course data store. The scoring processor 1310 can be implemented in any suitable manner, for example, as part of the course backend processor 180 of FIG. 1. The scoring processor 1310 operates to compute a consumption score 1320 for each content item and to compute an effectiveness score 1315 for each content item. Implementations compute the scores differently for different types of content items. For example, the scoring processor 1310 can maintain and/or obtain information indicating a size of a population that has consumed and/or acquired a particular microstructure (e.g., a lesson datagraph microstructure 305 or a practice datagraph microstructure 350). The population can be determined as a number of students that has acquired the microstructure, or as the number of students that has consumed the microstructure divided by a total number of students (e.g., total subscribers to the environment, total number of students in a particular course of study, total number of students taking a particular course (where the course adapts to students, so that not all students taking the course are provided with the same set of microstructures), etc.). For lesson datagraph microstructures 305, the scoring processor 1310 can compute the effectiveness score 1315 according to an average impact of the lesson datagraph microstructure 305 on knowledge level (e.g., on its own or with respect to the overall change in knowledge level over the knowledge entity 210 in which the lesson datagraph microstructure 305 is embedded). For practice datagraph microstructures 350, the scoring processor 1310 can compute the effectiveness score 1315 according to the apparent ability of the practice datagraph microstructure 350 to distinguish students.
For example, as described above, various techniques can be used to determine whether a practice datagraph microstructure 350 is an effective differentiator of student knowledge according to the knowledge levels of the students and their responses to the practice datagraph microstructure 350. In some implementations, the microstructure can be further scored according to its sub-items. For example, each response 370 for a practice datagraph microstructure 350 can be scored according to the number of students choosing the response (its consumption score 1320) and a value of the explanation associated with the response (its effectiveness score 1315).
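The two scores described above can be sketched in a few lines. This is a minimal illustration under assumed data shapes (a list of consumer ids and a list of per-student knowledge-level changes), not the scoring processor 1310 itself.

```python
# Minimal sketch of the scoring computations described above: a consumption
# score as the fraction of students who consumed a content item, and an
# effectiveness score for a lesson microstructure as the average change in
# knowledge level across its consumers. Data shapes are assumptions.

def consumption_score(consumers, total_students):
    """Fraction of the student population that consumed the content item."""
    return len(consumers) / total_students if total_students else 0.0

def lesson_effectiveness(knowledge_deltas):
    """Average impact on knowledge level across students who consumed the
    lesson datagraph microstructure."""
    return sum(knowledge_deltas) / len(knowledge_deltas) if knowledge_deltas else 0.0

consumers = ["s1", "s2", "s3", "s4"]
deltas = [0.10, 0.20, 0.15, 0.15]   # per-student change in knowledge level
print(consumption_score(consumers, 10))          # → 0.4
print(round(lesson_effectiveness(deltas), 2))    # → 0.15
```

An analogous function for practice microstructures would instead measure how well responses separate high-knowledge from low-knowledge students, per the differentiation techniques referenced above.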

Similarly, embodiments of the scoring processor 1310 can calculate a consumption score 1320 for each knowledge entity 210 as a function of a size of a population of students that has consumed the knowledge entity 210. And embodiments of the scoring processor 1310 can compute an effectiveness score 1315 for each knowledge entity 210 as a function of a knowledge level impact of the knowledge entity 210 on the population of students. For example, the knowledge level impact can be an average change in knowledge level of a student over the course of acquiring the knowledge entity 210 (e.g., including normalizing the change, including monitoring the speed of acquisition, etc.). Certain content items can be scored in particular ways. For example, some implementations can score a course 1305 only as a roll-up of the values of its sub-items (e.g., where virtual boundary definition is common, the course 1305 itself may include very little dedicated content). As another example, as described above, edges of the datagraphs can link nodes in ways that manifest dependency (e.g., prerequisite knowledge entities 210, etc.). In such cases, the score of a particular node can be further a function of the impact of the node on its dependent nodes. For example, if a particular prerequisite knowledge entity 210 does a very good job of preparing a student for a dependent node (i.e., for which it is a prerequisite), the effectiveness score 1315 for that prerequisite knowledge entity 210 can be increased.

Embodiments can further include a dynamic valuation processor 1340 that is in communication with the scoring processor 1310. Having computed scores for the various content items with the scoring processor 1310, embodiments can assign a monetary value to each content item as a function of its computed consumption score 1320, its computed effectiveness score 1315, and other information (e.g., a stored contribution value mapping). For example, the dynamic valuation processor 1340 can include a stored mapping (e.g., for each type of content item) that maps effectiveness score 1315 and consumption score 1320 for a content item to a particular monetary value (e.g., a cash value, or any suitable analog of cash value, such as loyalty points, discounts, etc.). The assigned monetary values can be used to dynamically price courses (e.g., or any suitable collection of content items) and/or to determine an appropriate compensation level (e.g., or rating, level, etc.) of a content item contributor. For example, in the latter case, the dynamic valuation processor 1340 can determine a compensation score 1325 for a particular contributor by determining the contribution level for each content item (e.g., did the contributor contribute to a particular item, and, if so, at what percentage (e.g., where there were joint contributors), etc.), by type of contribution (e.g., in what role, for example, as an author, moderator, etc.), by type of content item (e.g., contribution of a knowledge entity 210 may be considered as more valuable than contribution of an explanation for a response to a practice datagraph microstructure 350), etc. Further, some pricing and/or compensation structures can consider additional types of information, such as whether sufficient data is available to perform a useful dynamic valuation.
For example, default pricing and/or compensation can be used until enough students have consumed content items, and the dynamic valuation can be performed only after that threshold consumption has occurred.
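The valuation step with a default fallback can be sketched as below. The threshold, default value, and multiplicative mapping are illustrative assumptions standing in for the stored contribution value mapping described above.

```python
# Hypothetical sketch of the dynamic valuation processor's mapping: a content
# item's monetary value is derived from its consumption and effectiveness
# scores, falling back to a default value until a threshold number of
# students has consumed it. All constants are assumptions for illustration.

MIN_CONSUMERS = 100   # assumed threshold before dynamic valuation applies
DEFAULT_VALUE = 1.00  # assumed default monetary value

def assign_value(consumers, consumption_score, effectiveness_score,
                 base_value=10.0):
    """Map scores to a monetary value once enough data is available."""
    if consumers < MIN_CONSUMERS:
        return DEFAULT_VALUE  # not enough data: keep default pricing
    # Simple multiplicative mapping standing in for a stored lookup table.
    return round(base_value * consumption_score * effectiveness_score, 2)

print(assign_value(50, 0.4, 0.8))    # → 1.0 (below threshold, default)
print(assign_value(500, 0.4, 0.8))   # → 3.2
```

A production mapping could differ per content item type and could return loyalty points or discounts rather than a cash value, as noted above.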

FIG. 14 shows a flow diagram of an illustrative method 1400 for dynamic pricing of e-learning content items, according to various embodiments. Embodiments begin at stage 1404 by pricing a course at a default price. For example, a default course price can be determined in any suitable manner and can be assigned to the course prior to dynamic pricing. At stage 1408, a determination can be made as to whether there is sufficient data for dynamic re-pricing (e.g., if a sufficiently large population of students has consumed the course and/or enough of its sub-items). If not, the course can continue to be offered at its default price. If so, the method 1400 can continue with dynamic re-pricing.

At stage 1412, embodiments can determine “sub-microstructure” values as a function of their respective consumption scores and effectiveness scores, and roll up those values to their respective microstructure level (e.g., a collection of responses can be considered as contributing value to their associated microstructure). For example, sub-microstructures can include step objects, responses, explanations, etc. At stage 1416, embodiments can determine microstructure values as a function of their respective consumption scores, effectiveness scores, and rolled up sub-microstructure values. The microstructure values can be rolled up to their respective knowledge entity level. At stage 1420, embodiments can determine knowledge entity values as a function of their respective consumption scores, effectiveness scores, and rolled up microstructure values. The knowledge entity values can be rolled up to their respective course level. At stage 1424, embodiments can determine course value as a function of its consumption score, effectiveness score, and rolled up knowledge entity values. At stage 1428, the course can be re-priced according to its determined course value. For example, the dynamic re-pricing can be performed on-demand (e.g., by explicitly executing a routine), periodically, or at any suitable time, and the method 1400 can return a suggested new price. A particular role (e.g., a project manager, etc.) can determine whether to accept or ignore the new pricing. Some implementations can include publishing or otherwise indicating the course pricing to students via their respective course consumption platforms (or via any other interface).
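The bottom-up roll-up of stages 1412 through 1424 can be sketched as a recursion over the content hierarchy. The tree representation and the additive combination of a node's own score with its children's values are assumptions chosen for brevity.

```python
# Sketch of the roll-up in method 1400: each node's value combines its own
# consumption and effectiveness scores with the rolled-up values of its
# children (sub-items). The tree shape and weighting are illustrative.

def roll_up_value(item):
    """Recursively compute an item's value from its scores and children."""
    own = item.get("consumption", 0.0) * item.get("effectiveness", 0.0)
    children = sum(roll_up_value(c) for c in item.get("children", []))
    return own + children

course = {
    "consumption": 1.0, "effectiveness": 0.1,
    "children": [  # knowledge entities
        {"consumption": 0.8, "effectiveness": 0.5,
         "children": [  # microstructures (with step objects, responses, etc.)
             {"consumption": 0.6, "effectiveness": 0.5},
         ]},
    ],
}
print(round(roll_up_value(course), 2))  # → 0.8
```

Because the same recursion applies at every level, the sketch covers sub-microstructures, microstructures, knowledge entities, and the course itself with one function.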

FIG. 15 shows a flow diagram of an illustrative method 1500 for dynamic compensation of contributors to e-learning content items, according to various embodiments. Embodiments begin at stage 1504 by setting a compensation amount to a default schedule for a contributor. For example, each contributor type, each type of contribution, each level of contribution, etc. can be assigned a default (or negotiated) rate. At stage 1508, a determination can be made as to whether there is sufficient data for computing a dynamic compensation. For example, the default rate can be used until a sufficiently large population of students has acquired contributed items from that contributor (which can, in some implementations, involve a pre-computation of which items were contributed by a particular contributor, at what level, etc.). If not, the content items can continue to be offered, and the contributor can continue to be paid at the current level. If so, the method 1500 can continue with dynamic re-compensation.

At stage 1512, embodiments can identify a set of contributed items and a contribution level associated with the contributor. For example, it can be determined that a particular contributor has authored a number of practice datagraph microstructures and a number of lesson datagraph microstructures, helped author a large number of knowledge entities, moderated a large number of explanations, etc. Each type of contribution can be assigned a default or negotiated rate based on a predetermined schedule (e.g., a contract). At stage 1516, embodiments can compute a consumption score and an effectiveness score for each contributed item (e.g., using the scoring processor 1310 of FIG. 13). At stage 1520, embodiments can determine a compensation value for each contributed item as a function of its respective consumption score, effectiveness score, and a predetermined compensation value map (e.g., using the dynamic valuation processor 1340 of FIG. 13). Some determinations of compensation value can include computing a compensation score, for example, as a function of contribution level, contribution type, etc. At stage 1524, embodiments can adjust the payment amount for the contributor according to the total of the compensation values. For example, the contributor compensation can be determined as a function of an aggregate of the respective monetary values assigned to each of the set of contribution items, weighted by the contribution level of the contributor for each of the set of contribution items. The dynamic compensation can be performed on-demand (e.g., by explicitly executing a routine), periodically, or at any suitable time, and the method 1500 can return a suggested new compensation. A particular role (e.g., a project manager, etc.) can determine whether to accept or ignore the new compensation level.
Some implementations can include publishing or otherwise indicating the revised compensation to contributors via their respective course authoring platforms (or via any other interface).
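The weighted aggregation of stage 1524 can be sketched directly. The pair representation and the example values are assumptions; in practice the item values would come from the valuation step and the levels from the contribution records.

```python
# Sketch of stage 1524: aggregate the monetary values of a contributor's
# items, weighted by the contributor's share of each item. Field layout and
# example numbers are illustrative assumptions.

def contributor_compensation(contributions):
    """`contributions` is a list of (item_value, contribution_level) pairs,
    where contribution_level is the contributor's share of the item (0..1)."""
    return sum(value * level for value, level in contributions)

contributions = [
    (3.20, 1.0),   # sole-authored practice datagraph microstructure
    (8.00, 0.5),   # jointly authored knowledge entity
    (0.40, 1.0),   # moderated explanation
]
print(round(contributor_compensation(contributions), 2))  # → 7.6
```

Weighting by contribution type or role (author versus moderator, etc.) could be folded in as an additional multiplier per pair.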

While some of the above description assumes that value rolls up, some embodiments can permit values to trickle down. For example, based on various types of dynamic adaptations, multiple students consuming the same course may encounter different knowledge entities, different microstructures within those entities, different step objects within those microstructures, different responses and/or explanations for those step objects, etc. Accordingly, the consumption score of a particular content item can depend on its relationships to parent content items. As another example, different parent items can have particular value (e.g., according to their consumption and/or effectiveness scores, according to an associated task having been assigned a workflow value, etc.), and that value can impact the value of its child entities. Other scoring and valuation techniques can be used in other implementations.

Content Development and Moderation Flow

As described above, a single course can include many different types of content items, potentially authored or otherwise impacted by many different contributors. In general, contributors can be categorized as directors, authors, and moderators. Directors can include project managers (e.g., who oversee project plans, critical paths, resource allocations, etc.), academic directors (e.g., who oversee course content and academic quality), and/or others. Authors can include sole or joint creators of any content items that are included in the course datagraph macrostructure, such as knowledge entities, lesson datagraph microstructures, practice datagraph microstructures, responses, explanations, etc. In some embodiments, authors can also include various types of ancillary asset producers, such as creators of aesthetic and/or other supplemental content (e.g., sketches, models, simulations, videos, fonts, graphic designs, etc.). Moderators can include proactive editors (e.g., proofreaders, fact checkers, guideline reviewers to ensure conformity and quality, etc.), reactive editors (e.g., receivers of feedback from embodiments that may suggest certain action is warranted), etc. In many environments, particularly with larger teams and/or more complex courses, it can be desirable to ensure that tasks are being properly assigned to appropriate roles; that those tasks are being completed in a timely, efficient, and high-quality manner; and that the team can respond quickly and accurately to various types of feedback and/or external changes (e.g., changes in a curriculum). Further, it can be desirable for contributors (e.g., new contributors, contributors moving into a new role or contribution type, etc.) to be vetted quickly and accurately (e.g., accepting a new contributor's work with as few iterations as possible, while mitigating the inclusion of sub-standard content in a course).

Accordingly, embodiments include novel workflows for both content development (authoring) and moderation. For example, FIG. 16 shows a flow diagram of an illustrative content development workflow 1600, according to various embodiments. Embodiments begin at a master workflow design phase 1604, in which a high-level view of a course is developed and refined. In one implementation, a high-level view of course development can effectively be a collaboration between a program manager and an academic director. For example, the academic director can oversee design of the academic aspects of the course to ensure that all the desired topics and sub-topics are being presented in a proper order, with proper prerequisites, etc. The program manager can then convert the specification from the academic director into a set of content development tasks intended for assignment to appropriate resources.

During a resource planning and assignment phase 1608, a set of contributors with respective roles can be identified, and content development tasks can be assigned (e.g., by the program manager) to appropriate contributors based on the contributors' roles, availabilities, and/or other characteristics. Each content development task can further be associated, as desired, with any other suitable information, such as estimated resource needs, deadlines, dependencies (e.g., whether other content development tasks must be completed before a particular content development task can begin, or can only begin after it completes), etc. In some embodiments, assignment of content development tasks to particular contributors in particular roles can include separately defining one or more contributors to author the content and defining one or more contributors to receive and/or incorporate feedback (e.g., corrections, etc.). Some resource planning and assignment tasks can involve further coordination between academic director roles and project manager roles. For example, determinations can be made regarding how best to break knowledge entities into lesson datagraph microstructures, whether and how to define quotas of practice datagraph microstructures (e.g., define a minimum number of practice items that must be included for a particular knowledge entity before the knowledge entity is ready for release), etc.

Some implementations assign values to tasks according to any suitable metrics. For example, it may be valuable to get a practice datagraph microstructure developed to a certain stage before assigning development of associated responses and/or explanations; but once those dependent tasks have been assigned, it may become highly valuable to finish development of the responses and explanations so that the practice datagraph microstructure can be ready for release. Such values can be used to define and/or determine critical path timing for the project workflow and/or to de-conflict assignment prioritizations (e.g., where two tasks have the same priority, but one may have a higher value, so that assignments can be de-conflicted in context of limited resources). For the sake of illustration, authors can be assigned tasks, including composing a new content item, composing a variant for a content item (e.g., for a knowledge entity, an explanation, etc.), correcting a mistake in a content item (e.g., as found by a moderator, by automated feedback analysis, etc.), clarifying an explanation detected as unclear or insufficient (e.g., by moderators, by automated feedback analysis, etc.), integrating previously unrecognized but valuable (e.g., unique, popular, etc.) free responses into a response set, adding or augmenting explanations to responses (e.g., to those that are popular but lack sufficient explanation), adding or elaborating diagnostics for a popular mistake, etc. Illustrative tasks for moderators can include rating various aspects of new content, reviewing content flagged by a student, ascertaining whether a student's self-assessments are correct (e.g., when indicating an essential match between a free response and one of the predefined responses in the “unrecognized-response” menu), etc.
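The de-confliction described above can be sketched as a simple ordering rule: equal-priority tasks are broken by task value. The task fields and example values are assumptions for illustration.

```python
# Illustrative sketch of de-conflicting task assignments: when two open tasks
# share a priority, the one with the higher value is assigned first in the
# context of limited resources. Field names are assumptions.

def order_tasks(tasks):
    """Sort open tasks by (priority, value), both descending."""
    return sorted(tasks, key=lambda t: (t["priority"], t["value"]), reverse=True)

tasks = [
    {"name": "author responses", "priority": 2, "value": 0.9},
    {"name": "compose variant", "priority": 2, "value": 0.4},
    {"name": "proofread lesson", "priority": 1, "value": 0.7},
]
print([t["name"] for t in order_tasks(tasks)])
```

In a fuller implementation, the value field could itself be dynamic, rising as dependent tasks are assigned (as in the responses-and-explanations example above).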

Having designed and assigned the content development tasks, embodiments can continue with a content development phase 1612. Content development is typically an iterative process including content authoring 1616 and proactive content moderating 1620. The content authoring can include design and implementation of any of the types of content items described above, for example, ranging from virtual course boundaries and course datagraph macrostructures, to embedded lesson and practice datagraph microstructures, to their respective step objects, responses, explanations, and so on. Further, any content item can be divided into multiple sub-tasks, which can be individually assigned and developed. For example, a single lesson datagraph microstructure can include content authored by one contributor and an embedded descriptive animation authored by a different contributor.

The proactive content moderating 1620 can include any suitable types of moderation performed prior to release of the content. One type of proactive content moderating 1620 can be guideline review. For example, content can be reviewed for legal compliance (e.g., to avoid copyright infringement, etc.), style conformity (e.g., so that all the content has a similar voice and style), and/or other compliance (e.g., to avoid profanity, slang, etc.). Another type of proactive content moderating 1620 can be academic review. For example, content can be reviewed to ensure that facts, formulae, and/or other concepts are presented accurately, at an appropriate level, etc. Another type of proactive content moderating 1620 can be editing and/or proofreading. For example, content can be reviewed to avoid typographical errors, grammatical or spelling errors, etc. Some implementations of the proactive content moderating 1620 can include a defined review and approval process. For example, the workflow can be designed so that any new content must be reviewed and approved by co-authors and/or peers prior to being considered for release or further moderation. Some implementations can design the review process to depend on a contributor's status. For example, new content from trusted contributors can be eligible for release immediately after submission (e.g., and/or after limited proactive content moderating 1620), while new content from non-trusted authors may first enter a queue for peer-review by trusted authors. The proactive content moderating 1620 can be performed in any suitable order. For example, the workflow can be designed to enforce a particular order (e.g., academic review precedes guideline review, which precedes proofing).

Some embodiments of the content development phase 1612 include automated localization of content. Implementations include rules and/or roles for translating and/or localizing content to conform with a particular language, culture, etc. For the sake of illustration, a mathematical word problem is authored in English using common American names. Along with translating the mathematical word problem to Chinese, rules can recognize the presence of the common American names and can automatically change (or suggest changing) the names to common Chinese names. In some implementations, content items identified for localization can be presented to contributors having a translator and/or localizer role. Some such implementations can further assign each content item identified for localization to multiple translators and/or localizers, and can compare their respective output to look for matches and differences. Similarities between the translations and/or localizations can be used to automatically approve the translation and/or to increase a trust or reputation metric of a corresponding contributor. Differences between the translations and/or localizations can cause the translations to automatically be rejected (or otherwise flagged), and certain implementations can further assign the tasks to additional translators and/or localizers, as appropriate, until some convergence of output is realized. For example, suppose initial translation output yields translation “A” from translator A and translation “B” from translator B (i.e., the translations are different). The task can be assigned to an additional translator C, who outputs something sufficiently similar to translation “A”. Accordingly, translators A and C may increase in reputation score, while translator B may decrease in reputation score. Notably, some such approaches further account for the previous reputation of the translators.
For example, if translators A and C originally had a very low reputation score, and translator B initially had a very high reputation score, implementations can respond by seeking additional review, effecting a smaller change to the reputations of all the translators, etc. Implementations can distribute content items identified for localization in any suitable manner. For example, items can be distributed randomly or deliberately according to rules intended to mitigate “gaming” the system (e.g., by geographically distributing assignments, so that multiple contributors are unlikely to be in contact or collusion).
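The convergence-and-reputation idea can be sketched as follows. The exact-match comparison (standing in for a "sufficiently similar" test) and the fixed reputation step sizes are simplifying assumptions; the variant that weights changes by prior reputation is noted but not implemented here.

```python
# Sketch of translation convergence: compare independent translations,
# approve when a majority agree, and adjust each translator's reputation
# according to whether the output matched the consensus. The matching test
# and step sizes are illustrative assumptions.

from collections import Counter

def moderate_translations(outputs, reputations, step=0.1):
    """`outputs` maps translator id to translated text; `reputations` maps
    translator id to a score, updated in place. Returns the approved
    translation, or None when no majority exists yet."""
    counts = Counter(outputs.values())
    text, votes = counts.most_common(1)[0]
    if votes <= len(outputs) / 2:
        return None  # no convergence: assign to an additional translator
    for translator, out in outputs.items():
        reputations[translator] += step if out == text else -step
    return text

reps = {"A": 0.5, "B": 0.5, "C": 0.5}
approved = moderate_translations({"A": "译文一", "B": "译文二", "C": "译文一"}, reps)
print(approved, reps["A"] > reps["B"])  # consensus reached; A gains, B loses
```

Geographic or randomized distribution of the assignments, as described above, would sit outside this function, in the task-assignment logic.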

When the content has been developed and initially approved for release, content items can enter a content release phase 1624. The content release phase 1624 can include any preparations for publishing content. For example, some implementations load the developed content to an appropriate content data store (e.g., a central data server) and apply appropriate access controls (e.g., privileges, etc.) to the content, so that the content becomes accessible to students.

Traditional e-learning environments tend to release content as a particular content version, after which the content stays substantially static (e.g., until a new wholesale revision is released as a new version, or the like). For example, such traditional environments typically include little or no mechanism for receiving and responding to feedback after the content release phase 1624. Some embodiments described herein include a novel post-release workflow.

Subsequent to content release, embodiments can continue with a reactive content moderation phase 1628. The reactive content moderation phase 1628 can include feedback monitoring 1632 and reactive content moderating 1636. The feedback monitoring 1632 can include automated gathering and analysis of any of the various types of feedback described herein. Novel techniques are described herein for computing efficacy of various types of content based on monitoring consumption of datagraph structures by multiple (e.g., large numbers of) students. Some such types of automated feedback are described with reference to dynamic knowledge level adaptations, self-constructing content, dynamic contribution valuation, etc. For example, embodiments can dynamically determine a deficiency in practice items for a particular knowledge entity (e.g., whether there are too few practice items, whether the practice items are not well distributed across knowledge levels, whether practice items lack sufficient responses and/or explanations for responses, etc.). The dynamic determinations can be supplemented by explicit feedback (e.g., flagging) of deficiencies by students, etc.

Each feedback monitoring 1632 result can drive an associated workflow. In some embodiments, the feedback monitoring 1632 result can be automatically translated into a flag for a program manager or moderator. In some implementations, the flag can include an auto-generated recommendation. The program manager or moderator that receives the feedback monitoring 1632 result can open and assign an appropriate task. As described above, opening and assigning the task can involve additional functions, such as assigning timeframes and/or values to the tasks, etc. For example, if the feedback monitoring 1632 result indicates that there do not appear to be sufficient numbers of practice items associated with a knowledge entity (e.g., the knowledge entity adaptor 640 of FIG. 6 has repeatedly been unable to locate a suitable next practice item to provide at an appropriate difficulty level), a recommendation can be automatically communicated to the program manager to increase an assigned quota of practice items for the knowledge entity, to self-construct additional practice items, etc. As another example, the feedback monitoring 1632 result can indicate that a particular explanation is confusing (e.g., the scoring processor 1310 of FIG. 13 has computed a low effectiveness score 1315 because the explanation has been correlated with poor performance or has not been shown to have a meaningful impact on performance), that a practice datagraph microstructure does a poor job of differentiating student knowledge, etc. In such cases, a recommendation can be automatically issued to the program manager to assign a task of improving the quality of the explanation, improving or removing the offending practice datagraph microstructure, etc. As another example, certain self-constructing content techniques can involve receiving arbitrary interactions from many students to automatically construct sets of responses, explanations, diagnostics, etc. 
In some such cases, collections of arbitrary interactions or self-constructed content can be passed in a useful form to a program manager, or the program manager can otherwise be alerted to the availability of such information. The program manager can then generate and assign a task to author and/or moderate corresponding content (e.g., to author and/or moderate a set of incorrect responses to a practice item based on arbitrary interaction data and/or based on self-constructed responses). Some implementations can force certain actions in the context of certain types of feedback monitoring 1632 results. For example, when a content item is flagged as including poor content, profanity, copyright violations, etc., embodiments can automatically remove or suspend access to the flagged content immediately, after some threshold number of flags is received, etc. A program manager or moderator may also be informed to determine whether manual intervention or additional recourse is warranted.
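By way of illustration only, the flag-and-recommend flow described above can be sketched as follows (in Python; the function name, item fields, threshold, and recommendation strings are all hypothetical and not part of any particular embodiment):

```python
# Hypothetical sketch: translate low-effectiveness content items into flags
# with auto-generated remediation recommendations for a program manager.

ACCEPTANCE_LEVEL = 0.5  # assumed predetermined acceptance level

def monitor_feedback(content_items):
    """Return flags for items whose effectiveness falls below the threshold."""
    flags = []
    for item in content_items:
        if item["effectiveness"] < ACCEPTANCE_LEVEL:
            if item["kind"] == "practice_pool" and item.get("count", 0) < item.get("quota", 0):
                recommendation = "increase practice-item quota or self-construct items"
            elif item["kind"] == "explanation":
                recommendation = "improve or replace confusing explanation"
            else:
                recommendation = "review content item"
            flags.append({"item_id": item["id"], "recommendation": recommendation})
    return flags
```

A monitoring loop could call this periodically over the course data store, routing each flag to the appropriate program manager or moderator as a task.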

It is noted that, throughout the workflow (e.g., throughout the content development phase 1612 and the reactive content moderation phase 1628), embodiments can continually monitor the status of open items, determine whether to re-open and/or re-assign items, etc. This is illustrated as resource monitoring and reassignment 1640. For example, when a content item is flagged as incorrect, insufficient, or otherwise incomplete, resource monitoring and reassignment 1640 can determine whether to reassign the task to the same or a different contributor based, for example, on the type of feedback, availability of contributors, priorities of outstanding tasks, etc.

As in the initial resource planning and assignment phase 1608, various techniques can compute, for each desired content authoring and/or moderation task, a priority per contributor (author or moderator). Some implementations can support a flexible, dynamic rule set to allow optimization of the available content authoring and moderation resources. For example, if two very high-reputation moderators have already approved a content item, one implementation may assign the item to another high-reputation moderator with lower priority, but may assign the item to a lower-reputation moderator with higher priority. As another example, tasks related to new authors or moderators can be assigned higher priority to facilitate faster identification and/or promotion of good performers. As another example, tasks related to content that is expected to have high value in the system (e.g., having high levels of consumption, having many dependencies, etc.) can be assigned higher priorities. As another example, a translation task for a slight upgrade of an explanation may be given a lower priority, whereas a change to an explanation flagged as stemming from a fundamental error can be assigned a high priority.
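For illustration only, a per-contributor priority computation combining rules like those above might be sketched as follows (Python; the field names and multipliers are hypothetical assumptions, not a prescribed rule set):

```python
def task_priority(task, contributor):
    """Combine simple dynamic rules into one priority score (higher = sooner)."""
    priority = task.get("base_priority", 1.0)
    # Rule: an item already approved by two high-reputation moderators gets
    # lower priority for another high-reputation moderator, but higher
    # priority for a lower-reputation one (calibration against known-good work).
    if task.get("high_rep_approvals", 0) >= 2:
        priority *= 0.5 if contributor["reputation"] >= 0.8 else 1.5
    # Rule: work by new authors/moderators is vetted sooner.
    if task.get("author_is_new"):
        priority *= 1.25
    # Rule: high-value content (many dependencies, heavy consumption) first.
    priority *= 1.0 + 0.1 * task.get("dependency_count", 0)
    return priority
```

Because the rules multiply a base priority, new rules can be added or retired without restructuring the computation.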

Some embodiments can organize tasks into pools that may be assigned to particular contributors, groups of contributors, roles, etc. For example, the tasks can be assigned by priority, as “first grab first do,” or in any suitable manner. When a contributor pulls a task from a pool, some implementations assign a particular timeframe for completion of the task. For example, failing to complete the task within the timeframe can cause the task to be automatically flagged for review, automatically reassigned, automatically returned to the pool, etc. In such cases, the reputation score of the contributor failing to complete the task can be negatively impacted.
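A minimal sketch of such a pool, with priority ordering and automatic return of overdue tasks, might look as follows (Python; the class and method names are hypothetical):

```python
import heapq

class TaskPool:
    """Priority-ordered task pool; pulled tasks receive a completion deadline."""

    def __init__(self):
        self._heap = []         # (negated priority, task id, task)
        self._checked_out = {}  # task id -> (task, deadline)

    def add(self, task, priority):
        heapq.heappush(self._heap, (-priority, task["id"], task))

    def pull(self, now, timeframe):
        """Hand out the highest-priority task with a deadline of now + timeframe."""
        _, _, task = heapq.heappop(self._heap)
        self._checked_out[task["id"]] = (task, now + timeframe)
        return task

    def sweep(self, now):
        """Return overdue tasks to the pool and report the overdue task ids."""
        overdue = []
        for tid, (task, deadline) in list(self._checked_out.items()):
            if now > deadline:
                del self._checked_out[tid]
                self.add(task, task.get("priority", 1.0))
                overdue.append(tid)  # the assignee's reputation may be penalized
        return overdue
```

A periodic `sweep` call implements the automatic reassignment/return behavior; the returned ids could drive the reputation penalty described above.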

Some embodiments include additional functionality, such as contributor moderating 1644. As illustrated, some implementations continually moderate contributor quality, for example, as part of the proactive content moderating 1620 and/or reactive content moderating 1636 functions. Some contributor moderating can be based on reputation scores. For example, reputation scores can be calculated and can evolve for each contributor (e.g., authors, moderators, etc.), and can be segmented by any suitable metric. For example, a particular contributor can have a high reputation score for authoring practice items, but a low reputation score for authoring knowledge entities; a particular contributor can have a high reputation score for authoring mathematics content, but a low reputation score for authoring language arts content; a particular contributor can have a high reputation score for authoring formulaic types of practice items, but a low reputation score for authoring prose-based practice items; etc.

Reputation scores can be impacted in any suitable manner. For example, reputation scores for authors can be impacted directly by ratings they receive from moderators and/or consumers (e.g., students) of their contributed items. In some implementations, the impact of particular ratings on a contributor can be weighted by the reputation score of the rater. For example, ratings from a moderator with a high reputation score can have a larger impact than those from moderators with lower reputation scores. Further, some implementations compare ratings from different raters to determine similarities and differences; where multiple raters provide similar feedback about a particular contributor, the ratings can collectively make a larger impact than they would have individually (e.g., on the reputation score of the target contributor and/or on the reputation scores of the raters). Some implementations can promote students to moderators under particular conditions. For example, when embodiments determine that a student has demonstrated sufficient proficiency in a certain aspect, the student can be automatically prompted to become a moderator for that aspect (e.g., where the aspect can include a category of academic concepts, like statistics or French grammar; a type of feedback, such as accurate assessment of whether a certain explanation is clear or unclear; etc.). In some implementations, certain roles (e.g., system operators, project managers, etc.) can have full or partial ability to unilaterally adjust the reputation of a contributor. As described above, reputation scores can be used to drive approval processes, etc. For example, if the reputation of an author (or of a first few moderators) exceeds a certain threshold, new content can be considered approved for release before completing a full proactive content moderating 1620 (e.g., in some such cases, remaining proactive content moderating 1620 functions can be performed subsequent to content release).
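As one illustrative sketch only, a reputation update weighted by rater reputation, with an agreement bonus for similar ratings, might be written as follows (Python; the learning rate and the agreement formula are hypothetical choices, not part of any disclosed formula):

```python
def update_reputation(contributor_rep, ratings, learning_rate=0.1):
    """Move a contributor's reputation toward the rater-reputation-weighted
    mean of incoming ratings; agreement among raters amplifies the impact."""
    if not ratings:
        return contributor_rep
    total_weight = sum(r["rater_rep"] for r in ratings)
    weighted_mean = sum(r["score"] * r["rater_rep"] for r in ratings) / total_weight
    # Agreement bonus: low spread among scores -> larger collective impact
    # (2.0 when unanimous, down to 1.0 when maximally split on a 0..1 scale).
    spread = max(r["score"] for r in ratings) - min(r["score"] for r in ratings)
    agreement = 1.0 + (1.0 - min(spread, 1.0))
    step = learning_rate * agreement
    return contributor_rep + step * (weighted_mean - contributor_rep)
```

Scores and reputations here are assumed to lie in [0, 1]; segmenting by content type or subject (as described above) would simply keep one such score per segment.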

Tutoring Flow

Embodiments described herein can be implemented in many different ways. One type of implementation is to create a virtual private tutor with which a student can effectively “dialogue” during consumption of a course. Using adaptations described herein, the virtual tutor can continuously attempt to understand the student, while simultaneously attempting to keep the student engaged and improving his knowledge in an optimal way. For example, the virtual private tutor can determine which parts of all of the content are “legal”/“eligible” for being presented to the student at a given point in time, by checking whether all prerequisites of an item have either been met or are considered to be prior knowledge which has not yet been encountered (e.g., by multiple negative impacts being sent to the prior knowledge entity). Some implementations can choose from (or rank) all eligible content using a large set of considerations, including, for example: pedagogical considerations; the current estimated mastery level for each of the content items in the course, calculated as a function of the complete historical performance of the student in relation to each item (e.g., the time of each exposure (direct or indirect, for example, via impact), exposure results (for example, success/failure/neutral/partial success after some failures, etc.), the confidence level of the student in each response, the subjective understanding levels indicated by the student, etc.); “cool-offs” (e.g., putting a topic that has just been learnt/practiced on a temporary hold so that the next exposure to that topic occurs after some calendar time has passed, allowing the previous learning of that topic to “sink in”); task inventory (e.g., all else being equal, a topic with more available unseen tasks will be preferred, as doing so, for example, allows the content to be distributed more evenly across the calendar time up until the course completion date/final exam); learning path strategy; time constraints; upcoming midterm or finals goals; learning versus practice ratio; ratio of topics; forgetting curves; controlled experiments for testing effectiveness (e.g., for improving the overall system even at the expense of the individual student); student parameters, like performance, age, gender, etc.; reputation of the content authors/moderators responsible for the content; related tasks; biofeedback, such as anxiety or boredom; course customizations by a local chief; etc. In a choosing operation mode, embodiments can use an advanced neural network, machine learning, kernel method, decision tree, or other suitable technique to choose the next most desirable content item. In a ranking operation mode, the ranking function itself can be subject to iterations and evolution. For example, system developers can periodically add new function variations, and the system can automatically compete each function against the others by using each function for a random subset of users and comparing the performance of each subset after a predefined period of time and/or learning progress. The function yielding a higher average performance metric for its users can be declared the winner, and can subsequently be used for all users until a new ranking function is added.
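For illustration only, the eligibility check and a simplified version of the ranking considerations above might be sketched as follows (Python; the mastery threshold, scoring weights, and field names are hypothetical and stand in for the much larger consideration set described above):

```python
def eligible_items(items, mastery, prior_knowledge):
    """An item is eligible when each prerequisite is either mastered or is
    unencountered prior knowledge."""
    out = []
    for item in items:
        if all(mastery.get(p, 0.0) >= 0.8 or p in prior_knowledge
               for p in item["prereqs"]):
            out.append(item)
    return out

def rank(items, mastery, now, cooloffs):
    """Score eligible items with a few of the considerations named above."""
    def score(item):
        s = 1.0 - mastery.get(item["entity"], 0.0)    # prefer weaker topics
        if now < cooloffs.get(item["entity"], 0.0):   # honor cool-off holds
            s -= 10.0
        s += 0.05 * item.get("unseen_tasks", 0)       # prefer larger inventory
        return s
    return sorted(items, key=score, reverse=True)
```

A choosing mode would simply take the first ranked item; a competing-ranking-functions mode would swap in alternative `score` implementations for random user subsets.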

Some embodiments permit a student to learn in multiple modes. For example, implementations can include an “auto-pilot” mode (e.g., the top ranked piece of content is auto-provided), “semi-automatic” mode (e.g., up to a fixed number of top-ranked items are given as options for the student), “manual” mode (e.g., all items can be chosen, with certain non-eligible items potentially requiring a fulfillment of eligibility requirements), “Target-mode” (e.g., a non-eligible item can be chosen, the entire dependency tree is calculated to identify all of the entities that need to be acquired and/or mastered to the respective level, and then these entities are given in order of subsequent eligibility so that the student can reach the target entity in the shortest amount of time with all of the prerequisites being met), etc. As described herein, various adaptations are possible. According to one such adaptation, macro-variations can be provided for chosen pieces of content (e.g., based on performance history, correlations, mental attributes/tags, moderation status, etc.). For example, whenever there is more than one variation available for a micro-lesson, the system will choose among the variations (e.g., using a ranking or choosing function); and before there is sufficient data on any variation set to have statistical significance of any variation being superior for any given user profile, the variations can be rotated (e.g., round-robin style). According to another such adaptation, micro-variations can be provided. For example, variations of prompts can be either rotated (as round-robin, in a weighted mode based on the level of confidence the system already has for each variation, etc.), or chosen using the same methods as for the macro-variations.
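The rotate-then-weight treatment of variations described above might be sketched as follows (Python; the exposure threshold and Laplace smoothing are hypothetical choices standing in for “sufficient data for statistical significance”):

```python
import random

def choose_variation(variations, stats, confidence_threshold=30):
    """Round-robin until every variation has enough exposures for meaningful
    comparison; afterwards, weight choices by smoothed success rate."""
    exposures = [stats.get(v, {"shown": 0, "success": 0})["shown"]
                 for v in variations]
    if min(exposures) < confidence_threshold:
        # Round-robin phase: pick the least-shown variation.
        return variations[exposures.index(min(exposures))]
    # Weighted phase: Laplace smoothing keeps every weight positive.
    weights = [(stats[v]["success"] + 1) / (stats[v]["shown"] + 2)
               for v in variations]
    return random.choices(variations, weights=weights)[0]
```

The same function applies to both macro-variations (whole micro-lessons) and micro-variations (prompt wording), differing only in what the `variations` identifiers refer to.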

Some embodiments decide when and how to present time awareness training (TAT). For example, the system starts out by having a global default rule for when to display the time awareness training. The rule can contain such criteria as a minimum number of items shown since the last appearance of the TAT, types of content for which it is allowed to display the TAT, etc. In some implementations, the user may be allowed, when the TAT appears, to skip it (a “skip event”). In this case, the system may automatically decrease the frequency with which the TAT appears and/or further limit the types of content for which the TAT may appear (e.g., only showing the TAT for more difficult practice items). Whenever the user does respond to the TAT, the frequency may increase, according to a formula (e.g., out of various options fed into the system by the developers, and updated from time to time) that the system discovers to yield the lowest percentage of TAT “skip events.” The user's inclination to see the TAT can thereby be detected with ever-increasing precision. In addition, a function that weights all of the users' TAT preferences (e.g., frequency and content types), with the weights being the statistical confidence level for each student's TAT preference, can be used to determine the global default TAT preferences that will be used for new students for whom there is not yet sufficient data on this preference.
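One minimal sketch of the skip-driven frequency adjustment, expressed as the gap (number of items) between TAT appearances, could be (Python; the doubling/decrement rule and the bounds are hypothetical examples of the developer-supplied formulas mentioned above):

```python
def adapt_tat_frequency(current_gap, skipped, min_gap=5, max_gap=100):
    """Widen the gap between TAT prompts after a skip event (less frequent);
    narrow it after a response (more frequent), within fixed bounds."""
    if skipped:
        return min(current_gap * 2, max_gap)
    return max(current_gap - 2, min_gap)
```

Competing several such formulas and keeping the one with the lowest skip-event percentage, as described above, would amount to running each formula for a user subset and comparing skip rates.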

Some embodiments exploit biofeedback for adaptations. For example, embodiments can receive and save input from various sensors, including skin-conductivity, skin-acidity, skin-moisture, temperature, pulse rate, EEG, muscle tension, and/or other sensors. Embodiments can continuously search for correlations between the sensors' data, the conventional interpretations of such data (e.g., that an increase in moisture, a lowering in temperature, and an increase in pulse rate indicate mental tension), and the student's performance, in order to detect the student's personal learning-related tendencies and feed those tendencies back to the ranking/choosing methods (e.g., detecting that challenging one student at significantly above his current level, or presenting to that student questions from a specific topic, creates an increase in tension and thereby decreases performance). In such a case, the implementations can subsequently choose to challenge that student less and to show that anxiety-inducing topic at significantly lower difficulty levels than would otherwise be presented. This can allow for rapidly modifying the system choosing/ranking by getting immediate biofeedback that correlates to a future performance increase/decrease, instead of waiting for that future performance change to be actually measured and only then starting to respond to it. Similarly, embodiments can infer levels of alertness (e.g., or drowsiness, boredom, etc.) using standard methods, and then attempt to bring them into the desired ranges for effective learning (e.g., by choosing, or ranking higher, content items that are known to have been stimulating for previous students, when a student is starting to become bored or drowsy).

Some embodiments adapt to performance trends. For example, student encouragement can be automatically impacted by detecting that the student is experiencing a positive or negative streak, or the like. Similarly, reminders and/or motivators for coming back to study can be adapted to detected correlations between certain times of day, frequencies, etc. For example, when a clear correlation is determined between certain times of the day (or week, month, etc.), frequency and/or content of reminder alerts, spacing of new knowledge versus review, and/or other parameters can be adapted accordingly (e.g., to individual students and/or globally).

Some implementations include a confidence level controller. For example, embodiments can permit (or require) a student to indicate (e.g., by the hover position of the mouse) the student's confidence level in each of the student's responses (except in some degenerate cases, e.g., when only a single response option is available at some point in time). This indication can be obtained before the student gets the feedback on whether the respective response is correct or incorrect. Other embodiments can provide an understanding level controller. For example, the system can permit or require a student to indicate (e.g., by dragging the mouse across a rating scale) the student's subjective level of understanding of a topic, at various points from the time immediately at the completion of the initial micro-lesson, and throughout the subsequent learning. Some embodiments further provide adaptive challenges, for example, by competing against peers or against the clock (e.g., with indicators of record time for that task, 2nd place, etc.). For example, an implementation can optionally adapt the type and level of challenges presented to each student at each point in time. Performance is measured and correlated to the challenges given in order to adjust the challenges for yielding optimal student performance (e.g., a student may exhibit monotonically increasing performance as the challenge difficulty levels increase to 120% of the estimated student's mastery level for a given topic at a given time, and decreasing performance when going beyond 120%, as when the student becomes overwhelmed or anxious, etc.). Different students are found to have different challenge levels for maximum performance, and the average values across the entire student base (or per-course, or per-demography, etc., as the case may be) are used as the default value from which the system starts searching for the personal optimum of each new student.
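The search for each student's personal optimal challenge level, starting from the population default, might be sketched as a simple hill-climb (Python; the default of 1.2 echoes the 120% example above, but the step size and the hill-climbing strategy itself are hypothetical assumptions):

```python
def update_challenge_level(level, performance_history, default=1.2, step=0.05):
    """Hill-climb toward the challenge level (as a multiple of estimated
    mastery) that maximizes observed performance. performance_history is a
    list of (challenge_level, performance) pairs; new students start at the
    population default."""
    if len(performance_history) < 2:
        return default
    (lvl_a, perf_a), (lvl_b, perf_b) = performance_history[-2], performance_history[-1]
    direction = 1 if lvl_b >= lvl_a else -1
    if perf_b >= perf_a:
        # Last move helped (or held): keep moving in the same direction.
        return level + step * direction
    # Last move hurt: back off in the opposite direction.
    return level - step * direction
```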

Some embodiments can adapt to an available time window (e.g., for scheduled sessions). For example, the system, when used in the learning-session mode (i.e., when the student requests/expects the learning session to last a certain number of minutes or end at a certain time), ensures that there is a very high probability of the learning session ending very close to, and usually slightly before, the desired/expected end-time. This is achieved by considering historical timing data for all of the content items being evaluated for presentation in the session and factoring that data into the ranking/choosing. For example, in the early parts of the session, there would be a relative advantage to items which took previous students a higher average time and had higher standard deviations. In some cases, each specific student's timing can be compared to the general student population's timing (e.g., the average and standard deviation times of completing each item), to calculate a personal coefficient for the specific student that can be used to calibrate the expected value and confidence level (e.g., based on the standard deviation) predictions per item for that student. Individual timing predictions (per item per student, after calibration) can be used, for example, near the very end of the session, when there can be a very strong advantage to items whose expected timing is very close to the remaining time of the session, with maximum confidence or minimum standard deviation.
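These timing considerations might be sketched as follows (Python; the 0.5 session-fraction cutoff and the fitness formulas are illustrative assumptions, not a disclosed formula):

```python
def timing_fitness(item_stats, remaining_minutes, session_fraction_elapsed):
    """Early in a session, favor long/variable items; near the end, favor
    items whose expected time closely fits the remaining time, with low spread."""
    mean, std = item_stats["mean"], item_stats["std"]
    if session_fraction_elapsed < 0.5:
        return mean + std  # room to absorb variability early on
    # End-game: penalize mismatch with remaining time and high uncertainty.
    return -abs(mean - remaining_minutes) - std

def calibrate(student_times, population_means):
    """Personal coefficient: average ratio of the student's completion times
    to the population means for the same items."""
    ratios = [t / population_means[i] for i, t in student_times.items()]
    return sum(ratios) / len(ratios)
```

Multiplying an item's population mean by the personal coefficient yields the per-item, per-student prediction used near the end of the session.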

Some implementations can include a countdown timer (CDT). For example, embodiments can optionally show a timer that counts down for the student for any step (e.g., when a practice item is presented). The system can adapt the type of content items for which the CDT is shown, the frequency with which the CDT is shown, and the initial time from which the CDT starts counting for each student, by analyzing the correlation between the appearance of the CDT and students' performance (both in general, for calculating the optimal default values, and per student). When the CDT reaches zero (the student runs out of time), the student is shown (if available) a default step that corresponds to an “I ran out of time” response (invisible to the student), which may include its own impacts.

Some other embodiments can adapt example domains and present-events for content. For example, pieces of content (e.g., entire entities and even substrings, images, etc. inside step prompts) can be “tagged” as belonging to certain knowledge domains/fields of interest, etc. (e.g., “sports”, “politics”, etc.). Whenever the system has variations that contain content tagged with such domains, the system looks for correlations between each such variation and engagement, timing, and performance, and adapts the choice of future variations according to the correlations detected. For example, if a student is very interested in sports but disinterested in politics, the learning data would quickly indicate that the items with sports-related content variations better engage the student, and/or have better timing, and/or have better success rates, and/or have improved confidence levels (higher for correct responses and lower for incorrect responses) when compared to politics-related variations; this can lead implementations to subsequently prefer (all else being equal) sports-related variations over the politics-related ones. Another type of adaptation involves the frequency and style of self-assessment requests. For example, the system optionally asks the student from time to time for self-assessments (e.g., “highlight the sentence you didn't understand”; “say to the mic what you didn't understand”; “which of the following menu options best describes what you did not understand?”; etc.). Embodiments can adapt the type of self-assessment presented according to the types with the highest percentage of response (as opposed to the presented self-assessment request being skipped), and can adapt the frequency according to both global optimization and personal adjustments.
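A minimal sketch of the domain-tag preference adaptation could be (Python; the binary engaged/not-engaged observations and the tag field are simplifying assumptions standing in for the engagement, timing, and performance correlations described above):

```python
def domain_preference(history):
    """Per-tag engagement rate from (tag, engaged) observations."""
    scores = {}
    for tag, engaged in history:
        hits, total = scores.get(tag, (0, 0))
        scores[tag] = (hits + (1 if engaged else 0), total + 1)
    return {tag: hits / total for tag, (hits, total) in scores.items()}

def prefer_variation(variations, prefs):
    """All else being equal, pick the variation whose domain tag the
    student has engaged with most."""
    return max(variations, key=lambda v: prefs.get(v["tag"], 0.0))
```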

Some embodiments permit a student, for designated items or types of items, to review (“peek at”) alternate explanations within the same practice item that correspond to the “path not taken” (i.e., to responses not chosen in previous steps). The system can also optionally adapt the types of content and the frequency with which such an option is provided to the student, based on analysis of the student's learning performance, and of all students' learning performance, as a function of previous “peeks.” For example, if, for a specific student, a clear correlation is discovered between the option to peek and increased performance, then the system will give this student more of this option going forward, and vice versa.

The methods disclosed herein include one or more actions for achieving the described method. The method and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of actions is specified, the order and/or use of specific actions may be modified without departing from the scope of the claims.

The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions on a tangible computer-readable medium. A storage medium may be any available tangible medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM, or other optical disk storage, magnetic disk storage, or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.

A computer program product may perform certain operations presented herein. For example, such a computer program product may be a computer readable tangible medium having instructions tangibly stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. The computer program product may include packaging material. Software or instructions may also be transmitted over a transmission medium. For example, software may be transmitted from a website, server, or other remote source using a transmission medium such as a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technology such as infrared, radio, or microwave.

Further, modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by suitable terminals and/or coupled to servers, or the like, to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a CD or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.

Other examples and implementations are within the scope and spirit of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, “or” as used in a list of items prefaced by “at least one of” indicates a disjunctive list such that, for example, a list of “at least one of A, B, or C” means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Further, the term “exemplary” does not mean that the described example is preferred or better than other examples. As used herein, a “set” of elements is intended to mean “one or more” of those elements, except where the set is explicitly required to have more than one or explicitly permitted to be a null set.

Various changes, substitutions, and alterations to the techniques described herein can be made without departing from the technology of the teachings as defined by the appended claims. Moreover, the scope of the disclosure and claims is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods, and actions described above. Processes, machines, manufacture, compositions of matter, means, methods, or actions, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or actions.