By Paul Fain
A U.S. Senate committee released an unflattering report on the for-profit college sector on Sunday, concluding a two-year investigation led by Sen. Tom Harkin, an Iowa Democrat. While the report is ambitious in scope, and scathingly critical on many points, it appears unlikely to lead to a substantial legislative crackdown on the industry — at least not during this election year.
Issued by staff from the Democratic majority of the U.S. Senate Committee on Health, Education, Labor and Pensions, the report follows six congressional hearings, three previous reports and broad document requests. The final result is voluminous, weighing in at 249 pages and accompanied by in-depth profiles of 30 for-profits. It questions whether federal investment through aid and loans is worthwhile in many of the examined colleges.
The investigation found that large numbers of students at for-profits fail to earn credentials, citing a 64 percent dropout rate in associate degree programs, for example. It also links those high dropout rates to the relatively small amount of money for-profits spend on instruction.
Response of the Association of Private Sector Colleges and Universities: http://www.career.org/iMISPublic/AM/Template.cfm?Section=Home&CONTENTID=25566&TEMPLATE=/CM/ContentDisplay.cfm
By Martin Klubeck
Using “targets” and “stretch goals” to analyze your performance metrics is not only outdated; it’s a bad idea.
Not everyone agrees with me, though. For the past five years, in numerous online discussion groups, I have offered the concept of “expectations” as a replacement for the more traditional (some would say “time-tested”) targets and stretch goals. I have also presented the concept in seminars and presentations on metrics, where it has met with decidedly mixed reviews. When I propose the use of expectations to performance measurement and performance management experts, I generally receive a cautious rebuttal: the idea of not driving behavior through the careful collection, analysis, and reporting of data goes against the accepted paradigm. In contrast, many others support the concept of expectations and are ready to drop targets and stretch goals. The makeup of these two groups is telling: one believes metrics should be used to manage behavior; the other believes they should be used to provide insight for improvement.
Perhaps I should start at the beginning. What are “stretch goals,” “targets,” and “expectations,” and what impact does each have on how metrics are applied in your organization?
By Jimmy Daly
The EDUCAUSE Center for Applied Research published the study (below) in 2011 as an update to their 2006 study. The results of the survey are striking and indicate that colleges should be looking to their students for guidance. Of the 3,000 students surveyed, 43 percent felt that their institution needed to use more technology, and 51 percent felt that they knew more about technology than their professors. Perhaps this isn’t a shocking finding, but it verifies that schools are right to continue investing in technology, whether it’s software, hardware or training.
More important than the technology itself is the opportunity it creates for a better learning experience. And if students believe they can learn more — and more efficiently — with added technology, shouldn’t we provide it?
By Doris U. Bolliger & Fethi A. Inan
With the growth of online courses and programs in higher education, considerable concerns have emerged about students’ feelings of isolation and disconnectedness in the online learning environment. A research study was conducted to develop and validate an instrument for measuring the perceived connectedness of students enrolled in online degree or certificate programs in higher education. The instrument consists of 25 items organized into four scales: (a) community, (b) comfort, (c) facilitation, and (d) interaction and collaboration. One hundred forty-six online learners enrolled in courses at a Turkish university completed the online questionnaire. Results of a factor and reliability analysis confirmed that the instrument is a valid and reliable measure of students’ perceived connectedness in an online certificate program.
By John E. Chubb & Terry M. Moe
At the recent news conference announcing edX, a $60 million Harvard-MIT partnership in online education, university leaders spoke of reaching millions of new students in India, China and around the globe. They talked of the “revolutionary” potential of online learning, hailing it as the “single biggest change in education since the printing press.”
Heady talk indeed, but they are right. The nation, and the world, are in the early stages of a historic transformation in how students learn, how teachers teach, and how schools and school systems are organized.
These same university leaders mentioned the limits of edX itself. Its online courses would not lead to Harvard or MIT degrees, they noted, and were no substitute for the centuries-old residential education of their hallowed institutions. They also acknowledged that the initiative, which offers free online courses prepared by some of the nation’s top professors, is paid for by university funds—and that there is no revenue stream and no business plan to sustain it.
In short, while they want to be part of the change they know is coming, they are uncertain about how to proceed. And in this Harvard and MIT are not alone. Stanford, for instance, offers a free online course on artificial intelligence that enrolls more than 150,000 students world-wide—but the university’s path forward is similarly unclear. How can free online course content be paid for and sustained? How can elite institutions maintain their selectivity, and be rewarded for it, when anyone can take their courses?